
Subject Name: Distributed Systems Subject Code: ECS-801

Q1. How is resource sharing done in a distributed system? Explain with an
example. 10 Marks UPTU/MTU-2009/10/11
Ans. A distributed system makes it easier for users to access remote resources and
to share resources with other users, for example printers, files and web pages.
• A distributed system should also make it easier for users to exchange
information.
• Easier resource and data exchange can, however, create security problems; a
distributed system should deal with this problem.
• Resources in a distributed system are physically encapsulated within
computers and can only be accessed from other computers through a communication
interface that enables the resources to be accessed and updated reliably and
consistently. For example, users are concerned with sharing data in the form
of a shared database or a set of web pages, not the disks and processors on which
those are implemented. Similarly, users think in terms of shared resources
such as a search engine or a currency converter, without regard for the server
or servers that provide them.
Another example: a search engine on the web provides a facility to users
throughout the world.
The term distributed system is used to describe a system with the following
characteristics: it consists of several computers that do not share a memory or a
clock.

Challenges of D.S.: The main challenges of a D.S. are

• Resource sharing
• Enhanced Performance
• Improved Reliability
• Improved Availability
• Modular Expandability

Q2. Discuss the limitations of a distributed system. 5 Marks UPTU/MTU-2009


Ans: Global knowledge is not readily available, since it is impossible to collect
up-to-date information about the global state of a distributed system.
This is because of the following limitations:
• Lack of a global clock (common clock)
• Unpredictable message delay
Example: consider two sites, S1 (holding account A) and S2 (holding account B), with
Rs 50 in transit on the channel from S1 to S2.
Case 1: Global state = Local state of S1 + Local state of S2 = 450 + 200 = Rs 650,
i.e., Rs 50 is missing and the recorded state is incoherent.
Case 2: Global state = Local state of A + channel state + Local state of B
= 450 + 50 + 200 = Rs 700, which is a coherent state.
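A minimal Java sketch of the arithmetic above (the amounts are the ones used in the example; treating the Rs 50 in transit as channel state is the point being illustrated):

// A minimal sketch: summing only the local states misses money in transit.
public class GlobalStateDemo {
    public static void main(String[] args) {
        int localS1 = 450;      // local state of site S1 (account A)
        int localS2 = 200;      // local state of site S2 (account B)
        int channel = 50;       // Rs 50 in transit from S1 to S2

        // Case 1: ignoring the channel gives an incoherent global state
        System.out.println("Case 1 (no channel state): Rs " + (localS1 + localS2));              // 650

        // Case 2: including the channel state gives the consistent total
        System.out.println("Case 2 (with channel state): Rs " + (localS1 + localS2 + channel));  // 700
    }
}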

Q3. What do you mean by the global state of a distributed system? Also explain
the main features of a consistent global state. 10 Marks UPTU/MTU-
2009/10/11/12
Ans: Global state: "The global state of a distributed system is the set of local
states of all individual processes involved in the computation plus the state of the
communication channels."
Consistent global state: A global state of a distributed
system is consistent if no transactions are in progress.
• A global state is consistent if it could have been observed by an external
observer.
• If e → e' then it is never the case that e' is observed by the external observer
while e is not.
• All feasible states are consistent.


• Non-existence of physically shared memory
The above limitations create difficulty in obtaining a coherent global state.
Q4. What are logical clocks? Why does a logical clock need to be
implemented in a distributed system? Consider the following space-time
diagram for two processes P3 and P2.
Ans: A logical clock is a mechanism for capturing chronological and causal
relationships in a distributed system. Logical clock algorithms of note are:
• Lamport timestamps, which are monotonically increasing software counters;
• Vector clocks, which capture the causal (partial) ordering of events in a
distributed system;
• Matrix clocks, an extension of vector clocks that also contain information
about other processes' views of the system.
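A short, hedged Java sketch of a Lamport logical clock (class and method names are illustrative, not from any particular library): the counter is incremented on every local event, and on receipt of a message it jumps past the sender's timestamp.

import java.util.concurrent.atomic.AtomicLong;

// Sketch of a Lamport logical clock: a monotonically increasing counter that
// is incremented on local events and merged on message receipt.
public class LamportClock {
    private final AtomicLong time = new AtomicLong(0);

    // Called before every local event or message send.
    public long tick() {
        return time.incrementAndGet();
    }

    // Called when a message stamped with senderTime arrives:
    // the local clock jumps past the sender's timestamp.
    public long onReceive(long senderTime) {
        return time.updateAndGet(local -> Math.max(local, senderTime) + 1);
    }

    public long read() {
        return time.get();
    }
}

The timestamp attached to an outgoing message would simply be the value returned by tick() at send time.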

Q5. What do you mean by causal ordering of messages? Discuss the salient
features of a broadcast-based protocol
that makes use of vector clocks to ensure causal ordering of
messages. 10 Marks UPTU/MTU-2010/9/8/11
Ans: Causal ordering of messages:
The purpose of causal ordering of messages is to ensure that the causal
relationship between "message send" events is matched by the corresponding
"message receive" events (i.e., all the messages are processed in the order in which
they were created).
Schiper-Eggli-Sandoz (SES) Protocol: Instead of maintaining a vector clock based on the
number of messages sent to each process, the vector clock for this protocol can
increment at any rate it would like to and has no additional meaning related to the
number of messages currently outstanding.
Sending a message:
1. All messages are timestamped and sent out with a list of all the timestamps of
messages sent to other processes.
2. Locally store the timestamp that the message was sent with.
Receiving a message:
• A message cannot be delivered if there is a message mentioned in the list of
timestamps that predates this one.
• Otherwise, a message can be delivered, performing the following steps:
1. Merge in the list of timestamps from the message:
   • Add knowledge of messages destined for other processes to our list if we
     did not already know of another message destined for that process.
   • If the new list has a timestamp greater than one we already had stored,
     update our timestamp to match.
2. Update the local logical clock.
3. Check all the local buffered messages to see if they can now be delivered.
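The following Java sketch shows a generic vector-clock delivery test in the style of causal broadcast (Birman-Schiper-Stephenson). It is a simplification: the SES protocol above keeps per-destination timestamp lists rather than a single vector comparison, so this is only meant to illustrate the delivery condition.

import java.util.Arrays;

// Sketch of a vector-clock delivery test for causal ordering (assumed,
// simplified variant; not the exact SES bookkeeping).
public class CausalDelivery {
    // msgVC:   vector timestamp carried by the message
    // localVC: receiver's current vector clock
    // sender:  index of the sending process
    static boolean deliverable(int[] msgVC, int[] localVC, int sender) {
        for (int k = 0; k < msgVC.length; k++) {
            if (k == sender) {
                // exactly the next message expected from the sender
                if (msgVC[k] != localVC[k] + 1) return false;
            } else {
                // no causally earlier message from any other process is missing
                if (msgVC[k] > localVC[k]) return false;
            }
        }
        return true;
    }

    static void deliver(int[] msgVC, int[] localVC, int sender) {
        localVC[sender] = msgVC[sender];          // advance the sender's entry
        System.out.println("delivered, clock now " + Arrays.toString(localVC));
    }
}

A message that fails deliverable() is buffered and re-checked, exactly as in step 3 above.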
Q6. What are token-based and non-token-based algorithms? Explain Lamport's
algorithm with an example. 10 Marks UPTU/MTU-2009/10/11/13/14
Ans: Token vs. non-token based algorithms: We use tokens to encapsulate
anything that needs to be shared by the team, including information, tasks and
resources. The tokens are efficiently routed through the team via the use of local
decision-theoretic models. Each token is used to improve the routing of other
tokens, leading to a dramatic performance improvement when the algorithms work
together.
Non-token based algorithm: A station which requests a Critical Section (CS)
competes in order to be alone to use the unique channel dedicated to this CS. To
reach this goal, we derive a Markov process which guarantees that each station
will enter the CS. More precisely, it can be shown that, in the presence of collision
detection, n/ln 2 broadcast rounds are necessary in the average case to satisfy n
(n unknown) stations wishing to enter the same CS.
Lamport's algorithm (bakery algorithm), pseudocode:

// declaration and initial values of global variables
Entering: array [1..N] of bool = {false};
Number:   array [1..N] of integer = {0};

lock(integer i) {
    Entering[i] = true;
    Number[i] = 1 + max(Number[1], ..., Number[N]);
    Entering[i] = false;
    for (integer j = 1; j <= N; j++) {
        // Wait until thread j receives its number:
        while (Entering[j]) { /* nothing */ }
        // Wait until all threads with smaller numbers or with the same
        // number, but with higher priority, finish their work:
        while ((Number[j] != 0) && ((Number[j], j) < (Number[i], i))) {
            /* nothing */
        }
    }
}

unlock(integer i) {
    Number[i] = 0;
}

Thread(integer i) {
    while (true) {
        lock(i);
        // The critical section goes here...
        unlock(i);
        // non-critical section...
    }
}
Q7. Explain the deadlock handling strategies in a distributed system. Explain
the control organization for distributed deadlock detection.
Ans: (i) Deadlock handling strategies in a distributed system: Distributed
deadlocks can occur in distributed systems when distributed transactions or
concurrency control is being used. Distributed deadlocks can be detected either by
constructing a global wait-for graph from local wait-for graphs at a deadlock
detector, or by a distributed algorithm such as edge chasing. Phantom deadlocks are
deadlocks that are falsely detected in a distributed system, owing to delayed or
out-of-date state information, but do not actually exist.
Often neither deadlock avoidance nor deadlock prevention may be used. Instead,
deadlock detection and process restart are used, by employing an algorithm that
tracks resource allocation and process states, and rolls back and restarts one or
more of the processes in order to remove the deadlock. Detecting a deadlock that
has already occurred is easily possible, since the resources that each process has
locked and/or currently requested are known to the resource scheduler or OS.
(ii) Control organization for distributed deadlock detection: Edge chasing is an
algorithm for deadlock detection in distributed systems. Whenever a process A is
blocked for some resource, a probe message is sent to all processes A may depend
on. The probe message contains the process id of A along with the path that the
message has followed through the distributed system. If a blocked process receives
the probe, it updates the path information and forwards the probe to all the
processes it depends on. Non-blocked processes may discard the probe.
If eventually the probe returns to process A, there is a circular waiting loop of
blocked processes, and a deadlock is detected. Efficiently detecting such cycles in
the "wait-for graph" of blocked processes is an important implementation problem.
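A single-JVM Java sketch of the edge-chasing idea (the wait-for map and process ids are hypothetical test data): a probe carries the initiator's id and the path so far, blocked processes forward it along their wait-for edges, and a deadlock is reported if the probe returns to the initiator.

import java.util.*;

// Sketch of edge chasing in one process; real systems send probes as messages.
public class EdgeChasing {
    // waitsFor.get(p) = set of processes p is currently blocked on (hypothetical data)
    static Map<Integer, Set<Integer>> waitsFor = new HashMap<>();

    static boolean probe(int initiator, int current, List<Integer> path) {
        for (int next : waitsFor.getOrDefault(current, Set.of())) {
            if (next == initiator) {
                System.out.println("deadlock cycle: " + path + " -> " + next);
                return true;
            }
            if (!path.contains(next)) {            // forward the probe along an unseen edge
                List<Integer> extended = new ArrayList<>(path);
                extended.add(next);
                if (probe(initiator, next, extended)) return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        waitsFor.put(1, Set.of(2));
        waitsFor.put(2, Set.of(3));
        waitsFor.put(3, Set.of(1));                // 1 -> 2 -> 3 -> 1 forms a cycle
        probe(1, 1, new ArrayList<>(List.of(1)));
    }
}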
Q8. A centralized global deadlock detector holds the union of local wait-for
graphs. Give an example to explain how a phantom deadlock could be
detected if a waiting transaction in a deadlock cycle aborts during the
deadlock detection procedure.
Ans: A wait-for graph is a directed graph used for deadlock detection in
operating systems and relational database systems. A system that allows
concurrent operation of multiple processes and locking of resources, and which
does not provide mechanisms to avoid or prevent deadlock, must support a
mechanism to detect deadlocks and an algorithm for recovering from them.
When the wait-for graph is computed at a central location that collects messages
from the various systems involved in transactions, it is easy to also find false,
spurious or phantom deadlocks. For example, suppose Q first releases S and then P
requests S; if the release and request messages reach the coordinator out of order,
the coordinator may observe a wait-for cycle that no longer exists and report a
phantom deadlock.

Q9. What are the shortcomings of Ramamoorthy's two-phase algorithm for
deadlock detection? Show that Byzantine agreement cannot always be
reached among four processors if two processors are faulty.
Ans: (i)
■ Each site maintains a status table of all processes initiated at that site;
it includes all resources locked and all resources being waited on.
■ Controller requests (periodically) the status table from each site.
■ Controller then constructs WFG from these tables, searches for cycle(s).
■ If no cycles, no deadlocks.
■ Otherwise, (cycle exists): Request for state tables again.
■ Construct WFG based only on common transactions in the 2 tables.
■ If the same cycle is detected again, system is in deadlock.
■ Later proved: cycles in 2 consecutive reports need not result in a deadlock.
Hence, this algorithm detects false deadlocks.
(ii) A Byzantine fault is an arbitrary fault that occurs during the execution of an
algorithm by a distributed system. It encompasses those faults that are commonly
referred to as "crash failures" and "send and omission failures". When a Byzantine
failure has occurred, the system may respond in any unpredictable way, unless it
is designed to have Byzantine fault tolerance.
These arbitrary failures may be loosely categorized as follows:
• a failure to take another step in the algorithm, also known as a crash failure;
• a failure to correctly execute a step of the algorithm; and
• arbitrary execution of a step other than the one indicated by the algorithm.

For example, if the output of one function is the input of another, then small
round-off errors in the first function can produce much larger errors in the second.
If the second function were fed into a third, the problem could grow even larger,
until the values produced are worthless. Another example is in compiling source
code. One minor syntactical error early on in the code can produce large numbers
of perceived errors later, as the compiler gets out-of-phase with the lexical and
syntactic information in the source program.
Steps are taken by processes, the abstractions that execute the algorithms. A
faulty process is one that at some point exhibits one of the above failures. A
process that is not faulty is correct.
The Byzantine failure assumption models real-world environments in which
computers and networks may behave in unexpected ways due to hardware
failures, network congestion and disconnection, as well as malicious attacks.
Byzantine failure-tolerant algorithms must cope with such failures and still satisfy
the specifications of the problems they are designed to solve. Such algorithms are
commonly characterized by their resilience t, the number of faulty processes with
which an algorithm can cope.
Many classic agreement problems, such as the Byzantine Generals' Problem,
have no solution unless t < n/3, where n is the number of processes in the
system. In particular, for n = 4 processors with t = 2 faulty processors, the
condition t < n/3 is violated (2 ≥ 4/3), so Byzantine agreement cannot always be
reached.
Q10. What are the communication models proposed for the distributed
objects? Explain the concept of remote method invocation with a suitable
example.
Ans: Distributed objects are implemented in Objective-C using the Cocoa API
with the NSConnection class and supporting objects.
• Distributed objects are used in Java RMI.
• CORBA lets one build distributed mixed-object systems.
• DCOM is a framework for distributed objects on the Microsoft platform.
• DDObjects is a framework for distributed objects using Borland Delphi.
• JavaSpaces is a Sun specification for a distributed, shared memory (space
based).
• Pyro is a framework for distributed objects using the Python
programming language.
• Distributed Ruby (DRb) is a framework for distributed objects using the Ruby
programming language.
RMI (Remote Method Invocation) is a way that a programmer, using the Java
programming language and development environment, can write object-oriented
programs in which objects on different computers can interact in a distributed
network. RMI is the Java version of what is generally known as a remote procedure
call (RPC), but with the ability to pass one or more objects along with the request.
The object can include information that will change the service that is performed in
the remote computer. Sun Microsystems, the inventors of Java, calls this "moving
behavior." For example, when a user at a remote computer fills out an expense
account, the Java program interacting with the user could communicate, using
RMI, with a Java program in another computer that always had the latest policy
about expense reporting. In reply, that program would send back an object and
associated method information that would enable the remote computer program to
screen the user's expense account data in a way that was consistent with the latest
policy. The user and the company both would save time by catching mistakes
early. Whenever the company policy changed, it would require a change to a
program in only one computer.
RMI is implemented as three layers:
• A stub program on the client side of the client/server relationship, and a
corresponding skeleton at the server end. The stub appears to the calling
program to be the program being called for a service. (Sun uses the term proxy as
a synonym for stub.)
• A Remote Reference Layer that can behave differently depending on the
parameters passed by the calling program. For example, this layer can determine
whether the request is to call a single remote service or multiple remote programs,
as in a multicast.
• A Transport Connection Layer, which sets up and manages the request.
A single request travels down through the layers on one computer and up
through the layers at the other end.
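A minimal Java RMI sketch of the expense-policy example above; the interface name ExpensePolicy, the registry name "policy" and the toy policy rule are illustrative assumptions, not part of the RMI API itself.

import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

// Hypothetical remote interface; in a real project it would be public, in its own file.
interface ExpensePolicy extends Remote {
    boolean isExpenseAllowed(String category, double amount) throws RemoteException;
}

// Server-side implementation object; exported automatically by UnicastRemoteObject.
class ExpensePolicyImpl extends UnicastRemoteObject implements ExpensePolicy {
    protected ExpensePolicyImpl() throws RemoteException { super(); }

    public boolean isExpenseAllowed(String category, double amount) throws RemoteException {
        return amount <= 100.0;                   // toy policy rule (assumption)
    }
}

public class PolicyServer {
    public static void main(String[] args) throws Exception {
        Registry registry = LocateRegistry.createRegistry(1099);   // default RMI port
        registry.rebind("policy", new ExpensePolicyImpl());        // publish the remote object
        System.out.println("ExpensePolicy bound in registry");
    }
}

A client would obtain the stub with LocateRegistry.getRegistry(host) followed by lookup("policy"), and then call isExpenseAllowed() as if it were a local method.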
Q11. Discuss how a public-key scheme can be used to solve the key
distribution problem in a private-key cryptographic scheme.
Ans: Key distribution in a private-key scheme: It uses Diffie-Hellman
key exchange (D-H), a cryptographic protocol that allows two parties that have
no prior knowledge of each other to jointly establish a shared secret key over an
insecure communications channel. This key can then be used to encrypt
subsequent communications using a symmetric-key cipher.
Synonyms of Diffie-Hellman key exchange include:
• Diffie-Hellman key agreement
• Diffie-Hellman key establishment
• Diffie-Hellman key negotiation
• Exponential key exchange
Hellman suggested the algorithm be called Diffie-Hellman-Merkle key exchange
in recognition of Merkle's contribution to the invention of public-key cryptography.
Although Diffie-Hellman key agreement itself is an anonymous (non-
authenticated) key-agreement protocol, it provides the basis for a variety of
authenticated protocols, and is used to provide perfect forward secrecy in
Transport Layer Security's ephemeral modes.
Description
Diffie-Hellman key exchange: The simplest, and original, implementation of the
protocol uses the multiplicative group of integers modulo p, where p is prime and
g is a primitive root modulo p.
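A toy Java sketch of the exchange over the multiplicative group of integers modulo p (the small prime 23 and generator 5 are illustrative; real deployments use standardized groups of 2048 bits or more):

import java.math.BigInteger;
import java.security.SecureRandom;

// Toy Diffie-Hellman exchange; parameter sizes are for illustration only.
public class DiffieHellmanDemo {
    public static void main(String[] args) {
        BigInteger p = BigInteger.valueOf(23);    // public prime modulus (toy size)
        BigInteger g = BigInteger.valueOf(5);     // public generator

        SecureRandom rnd = new SecureRandom();
        BigInteger two = BigInteger.valueOf(2);
        BigInteger a = new BigInteger(8, rnd).mod(p.subtract(two)).add(BigInteger.ONE); // Alice's secret
        BigInteger b = new BigInteger(8, rnd).mod(p.subtract(two)).add(BigInteger.ONE); // Bob's secret

        BigInteger A = g.modPow(a, p);            // Alice sends A = g^a mod p
        BigInteger B = g.modPow(b, p);            // Bob sends   B = g^b mod p

        BigInteger keyAlice = B.modPow(a, p);     // (g^b)^a mod p
        BigInteger keyBob   = A.modPow(b, p);     // (g^a)^b mod p
        System.out.println("shared secret agrees: " + keyAlice.equals(keyBob));
    }
}

Both sides compute the same value, which can then seed a symmetric-key cipher as described above.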
Q12. The two-phase commit protocol is a centralized protocol where the
decision to abort or commit is taken by the coordinator. Design a
decentralized two-phase commit protocol where no site is designated to be a
coordinator.
Ans: Centralized two-phase commit protocol:
Two-phase commit is a standard protocol in distributed transactions for
achieving ACID properties. Each transaction has a coordinator who initiates and
coordinates the transaction.
In the two-phase commit the coordinator sends a prepare message to all
participants (nodes) and waits for their answers. The coordinator then sends their
answers to all other sites. Every participant waits for these answers from the
coordinator before committing to or aborting the transaction. If committing, the
coordinator records this into a log and sends a commit message to all participants.
If for any reason a participant aborts the process, the coordinator sends a rollback
message and the transaction is undone using the log file created earlier. The
advantage of this is that all participants reach a decision consistently, yet
independently.
However, the two-phase commit protocol also has limitations in that it is a
blocking protocol. For example, participants will block resource processes while
waiting for a message from the coordinator. If for any reason this fails, the
participant will continue to wait and may never resolve its transaction; therefore
the resource could be blocked indefinitely. On the other hand, a coordinator will
also block resources while waiting for replies from participants. In this case, a
coordinator can also block indefinitely if no acknowledgement is received from the
participant. To reduce blocking, a backup commit (BC) protocol has been proposed,
which attaches multiple backup sites to the coordinator site. In this protocol, after
receiving responses from all participants in the first phase, the coordinator
communicates the final decision to the backup sites in the backup phase.
Afterwards, it sends the final decision to the participants. When blocking occurs
due to the failure of the coordinator site, the participant sites can terminate the
transaction by consulting a backup site of the coordinator. In this way, the BC
protocol achieves the non-blocking property in most coordinator-site failures.
The BC protocol suits best World Wide Web (or Internet) environments where
a server has to face a high rush of electronic-commerce transactions that involve
multiple participants. Also, in the Internet environment, sites fail frequently and
messages take longer delivery times. In this situation, with extra hardware, the BC
protocol reduces the blocking problem without involving an expensive communication
cycle as compared to 3PC. Simulation experiments have shown that
the BC protocol exhibits superior throughput and response-time performance over
the 3PC protocol and performs closely to the 2PC protocol.
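A highly simplified, single-process Java sketch of the centralized two-phase commit decision logic; real coordinators add logging, timeouts and network messages, all omitted here, and the Participant interface is only a stand-in for the remote participant sites.

import java.util.List;

// Sketch of the coordinator's decision logic in centralized 2PC.
public class TwoPhaseCommit {
    interface Participant {
        boolean prepare();        // vote: true = ready to commit
        void commit();
        void rollback();
    }

    static void run(List<Participant> participants) {
        // Phase 1 (voting): collect a vote from every participant
        boolean allYes = true;
        for (Participant p : participants) {
            allYes &= p.prepare();
        }

        // Phase 2 (commit): broadcast the global decision
        if (allYes) {
            participants.forEach(Participant::commit);
        } else {
            participants.forEach(Participant::rollback);
        }
    }
}

In the decentralized variant asked for in the question, every site would broadcast its vote to every other site, and each site would apply the same commit/rollback rule locally instead of waiting for a coordinator.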

Q13. Describe how a non-recoverable situation could arise if write locks are released
after the last operation of a transaction but before its commitment
Ans: Non-recoverable situation if write locks are released early: The present
invention provides a system that supports recovery in the event a previous process
holding a lock used for mutual-exclusion purposes loses ownership of the lock.
This loss of ownership may occur due to the previous process dying or the lock
becoming unmapped. Under the present invention, a process first attempts to
acquire the lock. If the attempt to acquire the lock returns with an error indicating
that the previous process holding the lock lost ownership of the lock, the process
attempts to make the program state protected by the lock consistent. If the attempt to
make the program state consistent is successful, the system reinitializes and
unlocks the lock. Otherwise, the system marks the lock as unrecoverable so that
subsequent processes attempting to acquire the lock are notified that the lock is
not recoverable. One aspect of the present invention includes receiving a
notification in an operating system that a process died, and determining if the
process died while holding a lock. If the process died while holding the lock, the
system marks the lock to indicate to subsequent acquirers of the lock that a
previous holder of the lock died, and unlocks the lock so that other processes may
acquire the lock. According to one aspect of the present invention, if the attempt to
acquire the lock returns with an error indicating the lock is not recoverable, the
process performs operations to work around the inconsistent program state,
and reinitializes the lock.
Write-ahead logging (WAL) is a family of techniques for providing atomicity and
durability (two of the ACID properties) in database systems. In a system using
WAL, all modifications are written to a log before they are applied. Usually both
redo and undo information is stored in the log. The purpose of this can be
illustrated by an example. Imagine a program that is in the middle of performing
some operation when the machine it is running on loses power. Upon restart, that
program might well need to know whether the operation it was performing
succeeded, half-succeeded, or failed. If a write-ahead log were used, the program
could check this log and compare what it was supposed to be doing when it
unexpectedly lost power to what was actually done. Based on this comparison, the
program could decide to undo what it had started, complete what it had started, or
keep things as they are.
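A toy Java sketch of the write-ahead rule described above: the undo/redo record is written to the log file before the in-memory state is modified. The file name wal.log and the record format are assumptions made for illustration; a real system would also force the log to disk and run a proper recovery procedure at restart.

import java.io.FileWriter;
import java.io.IOException;

// Sketch of the write-ahead rule: log first, then apply the change.
public class WalSketch {
    private int balance = 100;                    // the "database" state

    void update(int newBalance) throws IOException {
        try (FileWriter log = new FileWriter("wal.log", true)) {
            // 1. append the log record first (undo = old value, redo = new value)
            log.write("UPDATE balance undo=" + balance + " redo=" + newBalance + "\n");
            log.flush();                          // a real system would also fsync here
        }
        // 2. only then apply the modification to the state
        balance = newBalance;
    }
}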
Q14. Discuss the web challenges for implementing a distributed system.
Ans. These are the following:
1. Heterogeneity
2. Openness
3. Security
4. Scalability
5. Failure handling
6. Concurrency
7. Transparency
1. Heterogeneity: We set up protocols to deal with these heterogeneities.
• Middleware: A software layer that provides a programming abstraction as
well as masking the heterogeneity.
• Mobile code: Code that can be sent from one computer to another and run
at the destination.
2. Openness: An open DS can be constructed from heterogeneous hardware and
software.
3. Security: Security for information resources has three components:
confidentiality, integrity, availability.
4. Scalability: A system is described as scalable if it remains effective when
there is a significant increase in the number of resources and the number of
users.
5. Failure handling: Techniques for dealing with failures include:
• Detecting failures
• Masking failures
• Tolerating failures
• Recovering from failures
• Redundancy
6. Concurrency: “There is a possibility that several clients will attempt to
access a shared resource at the same time.”
7. Transparency: Eight forms of transparency:
• Access transparency
• Location transparency
• Concurrency transparency
• Replication transparency
• Failure transparency
• Mobility transparency
• Performance transparency
• Scaling transparency
“Transparency is defined as the concealment from the user and application
programmer of the separation of components in a distributed system, so that the
system is perceived as a whole rather than as a collection of independent components.”

Q15. Define deadlocks. Differentiate between resource and communication
deadlocks. Discuss various deadlock handling strategies in detail.
Ans. A deadlock occurs when each transaction T in a set of two or more transactions
is waiting for some item that is locked by some other transaction T' in the set.
Hence, each transaction in the set is in a waiting queue, waiting for one of the
other transactions in the set to release the lock on an item.
Example: A simple example where two transactions T1 and T2 are
deadlocked in a partial schedule: T1 is in the waiting queue for X, which is locked
by T2, while T2 is in the waiting queue for Y, which is locked by T1. Meanwhile,
neither T1 nor T2 nor any other transaction can access items X and Y.
(a) A partial schedule of T1 and T2 that is in a state of deadlock.
(b) A wait-for graph for the partial schedule in (a).
Resource and communication deadlock:
• Resource deadlock: A set of processes is under resource deadlock if
every process is waiting for resources held by processes in the same set, and a
process becomes unblocked only when it receives all the resources it is waiting for.
• Communication deadlock: A set of processes is said to be in
communication deadlock if every process is waiting for a message from another
process in the same set; a process in the set initiates further communication only
when it receives all the messages it is waiting for.
Deadlock handling strategies
1. Prevention
2. Avoidance
3. Detection
1. Prevention: A process begins its execution only when the required resources are
available, and the resources are not preempted before execution begins.
2. Avoidance: Resources are allocated only if the resultant global system state is
safe. Each site maintains its local state, which requires storage in the form of memory;
this is an overhead.
3. Detection: It can be divided into three parts:
(a) Centralized D.D. algorithm. Advantage: constructing the RAG and WFG is easy.
(b) Distributed D.D. algorithm. Advantage: not susceptible to single-point failure.
(c) Hierarchical D.D. algorithm: this is suitable for both centralized and distributed
D.D.

Q16. Write short notes on the following:

(i) Wait-for graph
(ii) Atomic commit in distributed database systems.
Ans. (i) Wait-for graph: A simple way to detect a state of 'deadlock' is for the
system to construct and maintain a 'wait-for graph'.
• Processes are represented as nodes, and an edge from process Pi to Pj implies
that Pj is holding a resource that Pi needs and thus Pi is waiting for Pj to release
its lock on the resource.
• A 'deadlock' occurs if the graph contains any cycles.
• A wait-for graph scheme is not applicable to a resource allocation system with
"multiple instances" of each resource type.
Example: a small cycle-detection sketch is given below.
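This Java sketch (the names are illustrative) builds a wait-for graph as an adjacency map and reports a deadlock if a depth-first search finds a cycle:

import java.util.*;

// Sketch of deadlock detection on a wait-for graph: an edge Pi -> Pj means
// Pi waits for Pj; a deadlock corresponds to a cycle found by DFS.
public class WaitForGraph {
    private final Map<String, Set<String>> edges = new HashMap<>();

    void addWait(String waiter, String holder) {
        edges.computeIfAbsent(waiter, k -> new HashSet<>()).add(holder);
    }

    boolean hasDeadlock() {
        Set<String> visited = new HashSet<>(), onStack = new HashSet<>();
        for (String node : edges.keySet())
            if (dfs(node, visited, onStack)) return true;
        return false;
    }

    private boolean dfs(String node, Set<String> visited, Set<String> onStack) {
        if (onStack.contains(node)) return true;        // back edge => cycle => deadlock
        if (!visited.add(node)) return false;           // already fully explored
        onStack.add(node);
        for (String next : edges.getOrDefault(node, Set.of()))
            if (dfs(next, visited, onStack)) return true;
        onStack.remove(node);
        return false;
    }
}

For instance, addWait("T1", "T2") followed by addWait("T2", "T1") makes hasDeadlock() return true, matching the T1/T2 example in Q15.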
(ii) Atomic commit in distributed database systems: An atomic commit in a
database system fulfils two of the key properties of ACID (atomicity and
consistency). Consistency is only achieved if each change in the atomic commit is
consistent.
• The two-phase commit protocol (2PC) is a type of 'atomic commitment'
protocol.
• The two-phase commit protocol requires a co-ordinator to maintain all the
information needed to recover the original state of the database if something
goes wrong. As the name indicates, there are two phases: voting and commit.
(a) During the 'voting' phase each node writes the changes in the atomic
commit to its own disk. The nodes then report their status to the co-
ordinator.
(b) During the 'commit' phase the co-ordinator sends a commit message to
each of the nodes to record in their individual logs. If any of the nodes
reported failure, the co-ordinator will instead send a rollback message.
Q17. Explain the Lamport-Shostak-Pease algorithm (Oral Message Algorithm)
for 3m + 1 or more processors, where m is the number of faulty processors.
Ans: Valid for oral messages.
• No solution for fewer than 3m + 1 processors.
Assumptions:
A1: Every message is delivered correctly.
A2: The receiver knows who sent the message.
A3: The absence of a message can be detected.
Majority rule:
1. Choose the majority value if one exists, else RETREAT.
2. If choosing from an ordered set, choose the median.
Oral Messages:
Algorithm OM(0):
• The commander sends his value to every lieutenant.
• Each lieutenant (L) uses the value received from the commander, or
RETREAT if no value is received.
Algorithm OM(m), m > 0:
• The commander sends his value vi to every lieutenant i.
• Each lieutenant i acts as commander for OM(m - 1) and sends vi to the
other n - 2 lieutenants (or RETREAT if no value was received).
• For each i and each j ≠ i, let vj be the value lieutenant i receives from
lieutenant j in step (2) using OM(m - 1).
• Lieutenant i uses the value majority(v1, ..., vn-1).
• Why j ≠ i? "Trust myself more than what others said I said."
The assumptions ensure that:
• Faulty processors cannot interfere with communication as a third party.
• They cannot fake messages from others.
• They cannot interfere by remaining silent, since silence is detected (A3).
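A small Java sketch of the majority() step used above (only the voting helper, not the full recursive OM(m) message exchange): pick the value reported by a strict majority of lieutenants, otherwise fall back to the default order RETREAT.

import java.util.*;

// Sketch of the majority vote with RETREAT as the default order.
public class OralMessages {
    static String majority(List<String> values) {
        Map<String, Integer> counts = new HashMap<>();
        for (String v : values)
            counts.merge(v, 1, Integer::sum);     // tally each reported value
        for (Map.Entry<String, Integer> e : counts.entrySet())
            if (e.getValue() * 2 > values.size()) return e.getKey();
        return "RETREAT";                         // no strict majority: default value
    }

    public static void main(String[] args) {
        // e.g. m = 1 traitor among n = 4 processors: loyal lieutenants still agree
        System.out.println(majority(List.of("ATTACK", "ATTACK", "RETREAT")));  // ATTACK
    }
}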

Q18. Explain the following with an example:

1. Remote object reference
2. Remote interface
Ans:
1. Remote object reference: Other objects can invoke the methods of a remote object
only if they have access to its remote object reference. For example, a remote object
reference for B must be available to A, as shown in the figure.
2. Remote interface: Every remote object has a remote interface that specifies
which of its methods can be invoked remotely.
Q19. What are public and private keys? List the key differences and issues
in public-key cryptography and private-key cryptography.
Ans: Public and private keys: A cryptographic system uses two kinds of keys: a
public key and a private key.
• A public key is known to everyone.
• A private key is known only to the recipient of the message.
Public-key cryptography:
• Public-key encryption uses a pair of mathematically related keys. A message
that is encrypted with the first key must be decrypted with the second key,
and a message that is encrypted with the second key must be decrypted with
the first key.
• Each participant in a public-key system has a pair of keys. The private
key is kept secret. The other key is distributed to anyone who wants
it; this key is the public key.
To send an encrypted message to you, the sender encrypts the message using
your public key. When you receive the message you decrypt it using your
private key. To send a message to someone, you encrypt the message using
the recipient's public key; the message can then be decrypted only with the
recipient's private key.
Private-key encryption:
• Private-key encryption systems use a single key that is shared between the
sender and the receiver. Both must have the key: the sender encrypts the
message using the key, and the receiver decrypts the message with the
same key. Both must keep the key private to keep their communication
private.
Characteristics:
• Private-key encryption requires a key for every pair of individuals who need
to communicate privately. The necessary number of keys rises
dramatically as the number of participants increases.
• The fact that keys must be shared between pairs of communicators means that the
keys must somehow be distributed to the participants.
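A short Java sketch of the public-key flow described above, using the standard JCA API with RSA: the sender encrypts with the recipient's public key, and only the matching private key can decrypt. The key size and plaintext are illustrative.

import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import javax.crypto.Cipher;

// Sketch: encrypt with the public key, decrypt with the private key.
public class PublicKeyDemo {
    public static void main(String[] args) throws Exception {
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
        gen.initialize(2048);
        KeyPair pair = gen.generateKeyPair();

        Cipher enc = Cipher.getInstance("RSA");
        enc.init(Cipher.ENCRYPT_MODE, pair.getPublic());            // sender uses the public key
        byte[] ciphertext = enc.doFinal("secret message".getBytes(StandardCharsets.UTF_8));

        Cipher dec = Cipher.getInstance("RSA");
        dec.init(Cipher.DECRYPT_MODE, pair.getPrivate());           // recipient uses the private key
        System.out.println(new String(dec.doFinal(ciphertext), StandardCharsets.UTF_8));
    }
}

By contrast, a private-key (symmetric) scheme would use the same secret key object in both init() calls, which is exactly why that key must first be distributed to both parties.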

Q20. Explain in detail the architecture of distributed event notification.

Ans. Distributed event-based systems have two main characteristics:
Heterogeneous: Event notifications are used as a means of
communication between distributed objects of different kinds.
Asynchronous: Notifications are sent asynchronously by event-generating objects to
all the objects that have subscribed to them, to prevent publishers needing to
synchronize with subscribers.
The different participating objects are:
1. The object of interest
2. Event
3. Notification
4. Subscriber
5. Observer objects
6. Publisher
(ii) Remote procedure call:
Ans. When a process on machine A calls a procedure located on machine B, the
calling process on A is suspended and execution of the called procedure takes
place on B. Information can be transported from the caller to the callee in
the parameters and can come back in the procedure result.
Q21. Compare and contrast the methods of concurrency control for
transactions. Explain the methods for concurrency control in distributed
transactions.

Ans.
• The timestamp method is similar to two-phase locking in that both use
pessimistic approaches, in which conflicts between transactions are detected
as each object is accessed.
• Timestamp ordering decides the serialization order statically, when a transaction
starts; two-phase locking, on the other hand, decides the serialization order
dynamically.
• Timestamp ordering is better than strict two-phase locking for read-only
transactions.
• Two-phase locking is better when the operations in transactions are
predominantly updates.
• Timestamp ordering aborts a conflicting transaction immediately, whereas locking
makes the transaction wait.
• Timestamp ordering is deadlock free.
• With optimistic concurrency control, transactions are validated when they are
allowed to commit, or with forward validation transactions may be aborted earlier.
This results in relatively efficient operation where there are few conflicts, but a
substantial amount of work may have to be repeated when a transaction is aborted.
The methods for concurrency control in distributed transactions are:
1. Locking: In a distributed transaction the locks
on an object are held locally. When locking is used for concurrency control, the
objects remain locked and are unavailable for another transaction during the
atomic commit protocol. Consider the following interleaving of transactions T and
U at servers X and Y:

T                              U
write(A) at X    locks A
                               write(B) at Y    locks B
read(B) at Y     waits for U's lock on B
                               read(A) at X     waits for T's lock on A

Each transaction holds the lock the other one needs, so the interleaving deadlocks.
2. Timestamp ordering: In distributed transactions, we require that each co-
ordinator issue globally unique timestamps. A globally unique timestamp is
issued to the client by the first coordinator accessed by a transaction.
3. Optimistic concurrency control: A distributed transaction is validated by a
collection of independent servers, each of which validates transactions that
access its own objects.
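A minimal Java sketch of the basic timestamp-ordering write rule mentioned above (the field and method names are assumptions): a write by a transaction with timestamp ts on object x is rejected, forcing that transaction to abort and restart, if a younger transaction has already read or written x.

// Sketch of the basic timestamp-ordering write check for one object.
public class TimestampOrdering {
    static class ObjectState {
        long maxReadTs = 0;       // largest timestamp of a transaction that read this object
        long maxWriteTs = 0;      // largest timestamp of a transaction that wrote this object
    }

    // returns true if the write may proceed, false if the transaction must abort
    static boolean tryWrite(ObjectState x, long ts) {
        if (ts < x.maxReadTs || ts < x.maxWriteTs) {
            return false;         // conflict with a younger transaction: abort immediately
        }
        x.maxWriteTs = ts;
        return true;
    }
}

This also illustrates the comparison made earlier: the conflicting transaction is aborted at once rather than made to wait for a lock.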
Q22. What do you mean by two-phase locking? How is it different from strict
two-phase locking? Explain.
Ans. A transaction is said to follow two-phase locking if all locking operations
(read-lock, write-lock) precede the first unlock operation in the transaction. Such
a transaction can be divided into two phases:
(a) Expanding or growing (first) phase, during which new locks on items can be
acquired but none can be released.
(b) Shrinking (second) phase, during which existing locks can be released but
no new locks can be acquired.
If lock conversion is allowed, then upgrading of locks must be done during the
expanding phase and downgrading of locks must be done in the shrinking phase.
For example, transactions T1 and T2 may both follow two-phase
locking and yet still produce a deadlock.

Strict two-phase locking is the most popular version of two-phase locking
and guarantees strict schedules. In this variation, a transaction T does not
release any of its exclusive (write) locks until after it commits or aborts.
Hence, no other transaction can read or write an item that is written by T
unless T has committed, leading to a strict schedule for recoverability.
Strict two-phase locking is not deadlock-free. A more restrictive version is rigorous
two-phase locking.
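A toy Java sketch of the strict two-phase locking discipline (java.util.concurrent locks are used purely as stand-ins for item locks): locks are acquired as items are written (growing phase) and are all released together only at commit or abort.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.locks.ReentrantLock;

// Sketch of strict 2PL: no lock is released before commit/abort.
public class StrictTwoPhaseLocking {
    private final Deque<ReentrantLock> held = new ArrayDeque<>();

    // growing phase: acquire a lock on an item before writing it
    void writeLock(ReentrantLock itemLock) {
        itemLock.lock();
        held.push(itemLock);
    }

    // all locks are released together, only when the transaction commits or aborts
    void commitOrAbort() {
        while (!held.isEmpty()) {
            held.pop().unlock();
        }
    }
}

Plain (non-strict) 2PL would additionally allow unlocking individual items once the transaction has stopped acquiring new locks, i.e., during a shrinking phase before commit.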
Q23. Explain the following:
(a) Fault tolerant services
(b) Highly available services.

Ans. (a) Fault-tolerant services using active replication:
In the active model of replication for fault tolerance, the replica managers
are state machines that play equivalent roles and are organized as a group. Front
ends multicast their requests to the group of replica managers, and all the replica
managers process the request independently but identically and reply. The handling
of a request proceeds as follows:
1. Request: The front end attaches a unique identifier to the request and multicasts it.
2. Co-ordination: The request is delivered to every correct replica manager in the same order.
3. Execution: Every replica manager executes the request.
4. Agreement: No agreement phase is needed.
5. Response: Each replica manager sends its response to the front end.

(b) Highly available services: The emphasis here is on giving clients access to
the service, with reasonable response times, for as much of the time as possible,
even if some results do not conform to sequential consistency. For example, a user
on a train may be willing to cope with temporary inconsistencies
between copies of data such as diaries: they can continue to work while
disconnected and fix any problems later. An example is the Gossip architecture for
implementing highly available services.
Q24. Explain the term “routing”. How can routing problems be classified? Also
discuss the criteria for good routing algorithms.

Ans. A node in a computer network is in general not connected directly to every
other node by a channel. A node can send packets of information directly only to
a subset of the nodes, called the neighbours of the node.
Routing “is the term used to describe the decision procedure by which a node
selects one (or sometimes more) of its neighbours to forward a packet on its way to
an ultimate destination.”
Classification: Two types of routing algorithms:
(a) Non-adaptive routing algorithms
(b) Adaptive routing algorithms
(c) Hierarchical routing is used to make these algorithms scale to large networks.
1. Non-adaptive examples:
1. Flooding
2. Shortest-path routing (Dijkstra's shortest-path algorithm; a sketch is given
after this answer)
Criteria for a good routing algorithm:
1. Correctness: The algorithm must deliver every packet offered to the network
to its ultimate destination.
2. Efficiency: The algorithm must send packets through "good" paths; an algorithm
is called optimal if it uses the "best" paths.
3. Complexity: The algorithm for the computation of the tables must use as few
messages, as little time and as little storage as possible.
4. Robustness: In case of a topological change, the algorithm updates the
routing tables in order to perform the routing function in the modified
network.
5. Adaptiveness: The algorithm balances the load of channels and nodes by
adapting the tables in order to avoid paths through overloaded channels or nodes.
6. Fairness: The algorithm must provide service to every user to the same degree.
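A compact Java sketch of Dijkstra's shortest-path computation, the non-adaptive routing example mentioned in this answer; the adjacency-map representation (node -> neighbour -> link cost) is an assumption made for illustration.

import java.util.*;

// Sketch of Dijkstra's algorithm over an adjacency map with non-negative costs.
public class DijkstraRouting {
    static Map<String, Integer> shortestPaths(Map<String, Map<String, Integer>> graph, String source) {
        Map<String, Integer> dist = new HashMap<>();
        PriorityQueue<String> queue =
                new PriorityQueue<>(Comparator.comparingInt(n -> dist.getOrDefault(n, Integer.MAX_VALUE)));
        dist.put(source, 0);
        queue.add(source);

        while (!queue.isEmpty()) {
            String u = queue.poll();                             // node with the smallest known distance
            for (Map.Entry<String, Integer> e : graph.getOrDefault(u, Map.of()).entrySet()) {
                int alt = dist.get(u) + e.getValue();            // relax edge u -> neighbour
                if (alt < dist.getOrDefault(e.getKey(), Integer.MAX_VALUE)) {
                    dist.put(e.getKey(), alt);
                    queue.remove(e.getKey());                    // re-insert with the new priority
                    queue.add(e.getKey());
                }
            }
        }
        return dist;                                             // node -> cost of best path from source
    }
}

A node would use the resulting distances (together with the predecessor on each path) to fill its routing table toward every destination.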

Q25. Write short notes on the following:

(a) CORBA services (b) Deadlock-free packet switching
Ans. (a) CORBA services:
1. The naming service: The CORBA naming service is a sophisticated example
of a binder; it allows names to be bound to the remote object references of CORBA
objects within naming contexts.
2. CORBA event service: The CORBA event service specification defines
interfaces allowing objects of interest, called suppliers, to communicate notifications
to subscribers as arguments or results of ordinary synchronous CORBA remote
method invocations.
3. CORBA security service: It includes the following:
• Authentication of principals (users and servers)
• Access control applied to CORBA objects
• Facilities for non-repudiation
4. Trading service: In contrast to the naming service, which allows CORBA
objects to be located by name, the trading service allows them to be located by
attribute; it is a directory service.
5. Transaction and concurrency control services: They allow distributed CORBA
objects to participate in either flat or nested transactions.
6. Persistent object service: The architecture of the POS allows for a
set of data stores to be available; each persistent object has a persistent identifier.
(b) Deadlock-free packet switching: Messages (packets) travelling through a packet-
switched communication network must be stored at each node before being
forwarded to the next node on the path to their destination.
• Each node of the network reserves some buffer space for this purpose.
• As the amount of buffer space in each node is finite, situations may occur
where no packet can be forwarded because all buffers in the next node are occupied.
There are two kinds of methods for avoiding store-and-forward deadlocks:
1. Structured buffer pool: Identifies, for a node and a packet, a specific
buffer that must be taken if the packet is generated or received.
2. Unstructured buffer pool: Does not determine in which buffer a packet must be
placed.
