
CHAPTER 2

Deadlock: Definition
Four Necessary Conditions for a Deadlock to Occur
Deadlock Solutions
Traffic Deadlock
Deadlock
Deadlock refers to a specific condition in which two or more processes are each waiting for the other to release a resource, or in which more than two processes are waiting for resources in a circular chain.

Deadlock is a common problem in multiprocessing, where many processes share a specific type of mutually exclusive resource known as a software lock or soft lock.

Deadlocks are particularly troubling because there is no general solution for avoiding (soft) deadlocks.

This situation may be likened to two people who are drawing diagrams with only one pencil and one ruler between them. If one person takes the pencil and the other takes the ruler, a deadlock occurs when the person holding the pencil needs the ruler and the person holding the ruler needs the pencil before either can finish. Neither request can be satisfied, so a deadlock occurs.

DEADLOCK
EXAMPLES:

"It takes money to make money".

You can't get a job without experience; you
can't get experience without a job.

BACKGROUND:

The cause of deadlocks: each process needs what another process has. This results from sharing resources such as memory, devices, and links.

Under normal operation, resource allocation proceeds like this:

1. Request the resource (delaying until it is available, if necessary).
2. Use the resource.
3. Release the resource.
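The three steps above can be sketched in Python, with a `threading.Lock` standing in for a mutually exclusive resource (the printer here is purely illustrative):

```python
import threading

printer = threading.Lock()       # a hypothetical mutually exclusive resource

def use_printer(job):
    printer.acquire()            # 1. request the resource (blocks until available)
    try:
        result = f"printed {job}"    # 2. use the resource
    finally:
        printer.release()        # 3. release the resource
    return result
```

In idiomatic Python the acquire/release pair is usually written as `with printer:`, which guarantees step 3 even if step 2 raises an exception.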
Bridge Crossing Example

Traffic flows in only one direction.
Each section of the bridge can be viewed as a resource.
If a deadlock occurs, it can be resolved if one car backs up (preempting the resource and rolling back).
Several cars may have to back up if a deadlock occurs.
DEADLOCK CHARACTERISTICS
NECESSARY CONDITIONS
ALL four of these conditions must hold
simultaneously for a deadlock to occur:

1. Mutual Exclusion
A resource cannot be used by more than one process at a time.

2. Hold and Wait
Processes already holding resources may request new resources.


3. No Preemption
No resource can be forcibly removed from a process holding it; resources can be released only by the explicit action of the holding process.

4. Circular Wait
Two or more processes form a circular chain in which each process waits for a resource held by the next process in the chain
(Process A waits for Process B, which waits for Process C, ..., which waits for Process A).
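A minimal sketch of all four conditions at once, assuming Python threads and two locks (all names are illustrative). A real deadlock would block forever, so a timeout stands in for the infinite wait; when one thread gives up its request, breaking hold-and-wait, the other can finish:

```python
import threading

lock_a, lock_b = threading.Lock(), threading.Lock()
both_holding = threading.Barrier(2)   # force both threads into hold-and-wait
results = {}

def worker(name, first, second, patience):
    with first:                       # mutual exclusion: hold one resource
        both_holding.wait()           # hold and wait: both threads now hold a lock
        # circular wait: each thread requests the lock the other one holds;
        # the timeout stands in for a wait that would otherwise never end
        got = second.acquire(timeout=patience)
        results[name] = got
        if got:
            second.release()

t1 = threading.Thread(target=worker, args=("t1", lock_a, lock_b, 0.2))
t2 = threading.Thread(target=worker, args=("t2", lock_b, lock_a, 2.0))
t1.start(); t2.start()
t1.join(); t2.join()
# t1 times out and releases lock_a; only then can t2 make progress
```

With no timeouts, both `acquire` calls would wait forever: the classic deadlock.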
METHODS FOR
HANDLING
DEADLOCKS
Non-blocking synchronization algorithm

Non-blocking algorithms ensure that threads competing for a shared resource do not have their execution indefinitely postponed by mutual exclusion. A non-blocking algorithm is lock-free if there is guaranteed system-wide progress, and wait-free if there is also guaranteed per-thread progress.

They are designed to avoid requiring a critical section. Often, these algorithms allow multiple processes to make progress on a problem without ever blocking each other. For some operations, these algorithms provide an alternative to locking mechanisms.
Example: Blocking vs Non-blocking

Figure: blocking I/O without multiplexing

While blocking algorithms that employ the traditional (single-burst) approach can achieve better performance for very fast clients, they do not scale well. In the first diagram, clients requested 6 downloads at approximately the same time; because the server uses blocking I/O without multiplexing (SimpleBlockingAlgorithm) and has only 3 worker threads available to handle these downloads, 3 of the downloads are queued and will not start until a thread becomes free.

Figure: non-blocking I/O with multiplexing

In the second diagram, the server uses non-blocking I/O with multiplexing (EqualNonBlockingAlgorithm). This results in a slightly longer average download time, but the 3 threads can handle all 6 downloads concurrently, which yields much better throughput, most likely because the threads are utilized whenever any of the clients is able to receive more data.
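The queuing behaviour of the blocking server can be sketched as a toy worker pool in Python (the job numbers and timings are illustrative, not the benchmark from the figures): 3 worker threads pull from a shared queue, so only 3 of the 6 simultaneous requests run at once and the rest wait for a free thread.

```python
import threading, queue, time

jobs = queue.Queue()
active, peak = 0, 0          # how many downloads run now / at most
state = threading.Lock()
done = []

def worker():
    global active, peak
    while True:
        job = jobs.get()
        if job is None:      # poison pill: no more work
            return
        with state:
            active += 1
            peak = max(peak, active)   # record peak concurrency
        time.sleep(0.05)     # simulate the blocking download
        with state:
            active -= 1
        done.append(job)

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
for j in range(6):           # 6 clients request at roughly the same time
    jobs.put(j)
for _ in threads:
    jobs.put(None)
for t in threads:
    t.join()
# all 6 jobs complete, but never more than 3 at once
```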
Serializing tokens
Serializing tokens allow programmers to write multiprocessor-safe code without themselves, or the lower-level subsystems they call, needing to be aware of every single entity that may also be holding the same token.
Lock-free and wait-free algorithms

Wait-freedom is the strongest non-blocking guarantee of progress, combining guaranteed system-wide throughput with starvation-freedom. An algorithm is wait-free if every operation has a bound on the number of steps the algorithm will take before the operation completes.

Lock-freedom allows individual threads to starve but guarantees system-wide throughput. An algorithm is lock-free if, when the program's threads are run sufficiently long, at least one of the threads makes progress (for some sensible definition of progress). All wait-free algorithms are lock-free.
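The retry-loop shape shared by many lock-free algorithms can be sketched in Python. CPython exposes no user-level hardware compare-and-swap, so the `SimulatedCAS` class below emulates one with an internal lock purely to illustrate the structure; a real lock-free algorithm would use an atomic CPU instruction instead:

```python
import threading

class SimulatedCAS:
    """Toy stand-in for an atomic compare-and-swap cell (illustrative only)."""
    def __init__(self, value=0):
        self._value = value
        self._guard = threading.Lock()

    def load(self):
        return self._value

    def compare_and_swap(self, expected, new):
        with self._guard:            # stands in for one atomic instruction
            if self._value == expected:
                self._value = new
                return True
            return False

def lock_free_increment(cell):
    while True:                      # retry until our CAS wins
        old = cell.load()
        if cell.compare_and_swap(old, old + 1):
            return                   # some thread always succeeds: system-wide progress

counter = SimulatedCAS(0)
threads = [threading.Thread(
               target=lambda: [lock_free_increment(counter) for _ in range(1000)])
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter.load() == 4000: every successful CAS added exactly 1
```

Note that this structure is lock-free but not wait-free: an unlucky thread could in principle lose the CAS race indefinitely, yet some thread always completes, which matches the definitions above.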
Resource hierarchy solution

Assign a partial order to the resources and establish the convention that all resources will be requested in that order and released in reverse order, and that no two resources unrelated by the order will ever be used by a single unit of work at the same time.
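A minimal sketch of the resource hierarchy idea in Python, reusing the pencil-and-ruler example from earlier (the `Resource` class and rank numbers are illustrative): both workers name the resources in different orders, but the hierarchy forces one global acquisition order, so circular wait cannot arise.

```python
import threading

class Resource:
    """Hypothetical resource with a fixed rank in a global partial order."""
    def __init__(self, rank, name):
        self.rank, self.name = rank, name
        self.lock = threading.Lock()

def acquire_in_order(*resources):
    # acquire every requested resource in ascending rank, so no two
    # units of work can ever wait on each other in a cycle
    ordered = sorted(resources, key=lambda r: r.rank)
    for r in ordered:
        r.lock.acquire()
    return ordered

def release_in_reverse(ordered):
    for r in reversed(ordered):      # release in reverse acquisition order
        r.lock.release()

pencil = Resource(1, "pencil")
ruler = Resource(2, "ruler")

def worker(first, second, log):
    held = acquire_in_order(first, second)
    log.append([r.name for r in held])
    release_in_reverse(held)

log = []
t1 = threading.Thread(target=worker, args=(pencil, ruler, log))
t2 = threading.Thread(target=worker, args=(ruler, pencil, log))
t1.start(); t2.start()
t1.join(); t2.join()
# both workers acquired pencil before ruler, regardless of how they asked
```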
END
