
Q1.

A) The end-to-end delay for sending one packet over N store-and-forward links is: N * L / R
With P packets sent back to back, transmission is pipelined: while one packet crosses a later link, the next packet is already crossing an earlier one. The first packet arrives at the destination after N * L / R, and each subsequent packet arrives L/R after the one before it.
Therefore the end-to-end delay for all P packets is:
N * L / R + (P - 1) * L / R = (N + P - 1) * L / R
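Under store-and-forward the P packets pipeline across the links, so the last packet arrives at (N + P - 1) * L / R. A small simulation of the forwarding schedule illustrates this (the values of N, P, L, and R below are assumed for illustration):

```python
# Store-and-forward pipeline simulation (assumed values:
# N = 3 links, P = 4 packets, L = 1000 bits, R = 1e6 bps).
N, P, L, R = 3, 4, 1000, 1e6
T = L / R  # transmission time of one packet on one link

# finish[k][n] = time at which packet k has been fully received over link n.
# Packet k can start crossing link n once (a) it has fully arrived over
# link n-1 and (b) the previous packet has freed link n.
finish = [[0.0] * (N + 1) for _ in range(P)]
for k in range(P):
    for n in range(1, N + 1):
        start = max(finish[k][n - 1], finish[k - 1][n] if k > 0 else 0.0)
        finish[k][n] = start + T

total = finish[P - 1][N]
print(total)  # matches (N + P - 1) * L / R
```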

B) The question says that all packets arrive simultaneously at a link which is currently free.
So the first packet will experience no delay, i.e. 0 s.
The 2nd packet will have to wait until the first is transmitted, which takes L/R s.
The 3rd will have to wait until the first two are transmitted, i.e. 2 * L/R s.
Similarly, the nth packet will have to wait until n-1 packets are transmitted, which takes (n-1) * L/R s.
If we sum them all we get:
0 + L/R + 2L/R + ... + (N-1) * L/R
This simplifies to (L/R) * (0 + 1 + 2 + ... + (N-1)) = (L/R) * N(N-1)/2
If we divide this sum by N, the total number of packets, we get the average queuing delay:
(L/R) * N(N-1)/2 / N = (N-1) * L / (2R)
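A quick numeric check of this derivation, comparing the direct sum against the closed form (the packet count, size, and rate below are assumed values):

```python
# Average queuing delay check (assumed values:
# N = 5 packets, L = 1500 bits, R = 1e6 bps).
N, L, R = 5, 1500, 1e6

# Direct sum: the i-th packet (i = 0..N-1) waits i * L/R seconds.
avg_direct = sum(i * L / R for i in range(N)) / N

# Closed form: (N - 1) * L / (2 * R)
avg_closed = (N - 1) * L / (2 * R)
print(avg_direct, avg_closed)  # the two values agree
```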

Q2.


A) With store-and-forward packet switching the message reaches the
destination from the source in 3 hops.
The time for the message to traverse one hop is (message size) / (transmission
rate) = 9*10^6 / 3*10^6 = 3 seconds.
Since the message requires 3 hops, the total time is 3*3 = 9 seconds.
B) Time required for the first packet to reach the first switch = 9*10^3 /
3*10^6 = 3*10^-3 s = 3 ms.
While the first packet moves from the 1st switch to the 2nd switch, the 2nd
packet moves in parallel from the source to the first switch, so it reaches
the first switch when packet 1 reaches the 2nd switch, i.e. at T = 6 ms.
C) The time for the first packet to reach the destination is 9 ms; after that,
a subsequent packet reaches the destination every 3 ms. So the total time for
1000 packets to reach the destination is: 9 ms + 999 * 3 ms = 3.006
seconds.
It is clear that with message segmentation much less time is required
to transfer the message. In this case it required almost one-third the
time in A (since three hops were used).
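The two timings above can be reproduced directly from the problem's numbers (9*10^6-bit message, 3 Mbps links, 3 hops, 1000 packets of 9*10^3 bits):

```python
# Compare store-and-forward of the whole message vs. segmentation.
R = 3e6       # link rate, bits/sec
hops = 3

# (A) whole message, stored and forwarded at each hop
msg_bits = 9e6
t_whole = hops * msg_bits / R  # 9 seconds

# (C) segmented into 1000 packets: first packet takes hops * L/R,
# then one packet arrives every L/R thereafter
pkt_bits = 9e3
n_pkts = 1000
t_seg = hops * pkt_bits / R + (n_pkts - 1) * pkt_bits / R  # 3.006 seconds
print(t_whole, t_seg)
```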
D) 1. If an error occurs at some point, the whole message does not
need to be re-transmitted; only the erroneous packets need to be
re-transmitted.
2. Packets from multiple users can be interleaved (statistically
multiplexed) on a link, so messages of multiple users are sent in parallel.
E) 1. Segmentation requires the overhead of splitting the message at the
source and re-assembling it at the destination.
2. At the destination the packets must be put back in sequence.
3. Since segmentation produces multiple packets, there are
multiple headers, which increase the overall size of the message more
than the without-segmentation approach, in which the
header is added only once.
Q3.
A) The transmission time for an object is L/R, so the average
transmission time is the average object size divided by R:
(1,000,000 bits) / (1,000,000 bits/sec) = 1 sec.
The traffic intensity arriving at the link is (average transmission
time) * (arrival rate):
= 1 sec * 2 requests/sec = 2
We know that when the traffic intensity (La/R) becomes greater than 1,
the average queuing delay becomes infinite, since work arrives faster
than it can be serviced. So the average response time will
also be infinite, and the institution's users will not get their
networking needs fulfilled.
B) With the cache in place, only the requests not satisfied by the cache
traverse the access link, which reduces the traffic intensity on the
link to 0.6. The average access delay is then:
(1 sec) / (1 - 0.6) = 2.5 sec.
The average response time equals (average access delay) +
(average Internet delay):
= 2.5 sec + 3 sec = 5.5 sec
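The calculation above follows the access-delay approximation delay = t / (1 - rho) used in the text; as a sketch with the problem's values (1 s average transmission time, 0.6 traffic intensity with the cache, 3 s average Internet delay):

```python
# Average response time with a cache, using delay = t / (1 - rho).
t_trans = 1.0        # average transmission time, seconds
rho = 0.6            # traffic intensity on the access link with the cache
internet_delay = 3.0 # average Internet-side delay, seconds

access_delay = t_trans / (1 - rho)             # 2.5 s
response_time = access_delay + internet_delay  # 5.5 s
print(access_delay, response_time)
```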
Q4.
Let Tp represent the one-way propagation delay from the client to the
server.
Let's consider parallel downloads first, assuming non-persistent connections.
Parallel download allows 20 connections (for the referenced objects) to
divide the 250 bits/sec of bandwidth, meaning each connection gets 12.5
bits/sec.
Hence, the time required to get all the referenced objects will be:
(400/250 +Tp + 400/250 +Tp + 400/250 +Tp + 200,000/250+ Tp )
+ (400/(250/20)+Tp + 400/(250/20) +Tp + 400/(250/20)+Tp + 200,000/
(250/20)+ Tp )
= 804.8 + 16096 + 8*Tp (seconds)
=16900.8 + 8 * Tp seconds.
Next let us consider persistent HTTP connection. The total time required will
be:
(400/250 +Tp) * 3 + 200000/250 + Tp + 20 * (400/250+Tp +
200000/250+Tp)
= 4.8 + 3Tp + 800 + Tp + 20 * (1.6 + Tp + 800 + Tp)
=804.8 + 4Tp + 20 * (801.6 + 2 Tp)
= 16836.8 + 44 Tp
Taking the speed of light as 3*10^8 m/sec, Tp = 20/(3*10^8) = 0.067
microsec, meaning that Tp is insignificant compared with the transmission
delays.

Thus, persistent HTTP offers no substantial gain here over
non-persistent HTTP with parallel download.
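Both totals can be recomputed from the problem's values (400-bit control messages, 200,000-bit objects, 250 bps link, 20 referenced objects, and the tiny Tp found above):

```python
# Totals for the two schemes from the problem's values.
R = 250.0                 # link rate, bits/sec
ctrl, obj = 400.0, 200_000.0
n_obj = 20
Tp = 20 / 3e8             # one-way propagation delay, seconds

# Non-persistent HTTP with 20 parallel connections (each gets R/20):
# base page over the full link, then the 20 objects in parallel.
base = 3 * (ctrl / R) + obj / R + 4 * Tp
parallel = 3 * (ctrl / (R / n_obj)) + obj / (R / n_obj) + 4 * Tp
t_nonpersistent = base + parallel        # 16900.8 + 8*Tp seconds

# Persistent HTTP: one connection setup, then request/response per object.
t_persistent = 3 * (ctrl / R + Tp) + obj / R + Tp \
    + n_obj * (ctrl / R + Tp + obj / R + Tp)  # 16836.8 + 44*Tp seconds
print(t_nonpersistent, t_persistent)
```

Since Tp is on the order of 0.1 microseconds, the two totals differ by well under one percent, which is the basis for the conclusion above.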
Q5.
a) Yes, the parallel connections will help Bob get web pages more quickly:
since Bob has more connections, his share of the link bandwidth is
greater, so he gets web pages faster than the other users.

b) Yes, it will still be beneficial, since he will keep the benefit of
parallel downloads; if he alone stopped using parallel downloads while
the others continued, he would get less bandwidth
than the other users.
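A quick share calculation illustrates both parts; the user and connection counts below are assumed, along with the simplifying assumption that the link bandwidth is split equally per connection:

```python
# Bob's share of a link when bandwidth is split equally per connection
# (assumed scenario: 3 other users; Bob opens k connections).
def bob_share(k: int, other_users: int = 3, other_conns: int = 1) -> float:
    total_conns = k + other_users * other_conns
    return k / total_conns

print(bob_share(1))                    # everyone with 1 connection: 0.25
print(bob_share(10))                   # Bob with 10, others with 1: ~0.77
print(bob_share(10, other_conns=10))   # everyone with 10: back to 0.25
```

The last line shows part (b): once everyone opens parallel connections, shares equalize again, but any user who unilaterally gave them up would fall below an equal share.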
