
THE EDGE IS NOT THE END:
WHY CLOUD AND MOBILE MAKE CDNS OBSOLETE

BY PADDY GANTI

Recently Shane Lowry, our VP of Engineering, wrote a blog post on how the next
disruption in application delivery is about eliminating human middleware.
I want to provide some more context and share some data nuggets to expand
on the facts laid out in that article.
It's no surprise that mobile adoption and the advent of cloud computing are the two
biggest disruptions we have seen in the Internet service delivery space. In this post,
we consider the implications of these disruptions for both the client and the server
side. We will also show that content sizes are increasing, device diversity is exploding,
and the new choke point for application delivery is now the Radio Resource Controller
(RRC) and Radio Access Network (RAN). These challenges dictate a solution space
that's different from the previous approaches we have seen, and Instart Logic is
specifically focused here.
First, let's start by talking about the two key disruptions: mobile and its impact on the
client/device front, and cloud-based computing and what that means for the back end.

MOBILE
Globally, mobile traffic is about 30% of all Internet activity today and is increasing
rapidly, with an additional 6% of activity generated from tablets. The Cisco Visual
Networking Index (VNI) provides quantitative estimates of mobile data growth, which
show that we expect an 18x increase in 5 years (2011-2016).

[Figure: Cisco VNI forecast of mobile data growth, in exabytes per month, 2012-2016]

This growth is fueled by demand for better applications
and more content (mostly video) from a variety of
mobile devices. The reason this growth differs from
what we've seen historically is that desktops previously
consisted primarily of Wintel-based platforms using
wired-line access to the Internet, which made it easy
to optimize for a homogeneous workload. Today's
plethora of smart phones and tablets makes it an
entirely different ballgame.
While it's tempting to bundle all mobile growth into
a single bucket, in reality the demand for content
emanates from a wide variety of devices. The variety
starts with platforms. Let's consider the following
treemap of Android devices that are out there (Android
owns 72% of the market, while iOS accounts for 26%).

[Figure: treemap of Android device models in use]

From our own logs, we see the following distribution of device platforms:

Android 4.4
Android 4.2
Android 4.1
iOS 7.1
iOS 8.0
Other Android
Other iOS
Miscellaneous

To add to that, we also need to consider the screen size diversity, which ranges
from 320x480 pixels (smart phones) all the way up to 1920x1080 pixels (HD displays).
The bottom line is that mobile data is growing exponentially and is being consumed
by a greater variety of screens and device platforms.

CLOUD SERVICE ADOPTION
While the client side is exploding, on the server side we see a trend towards cloud
adoption. For web pages, cloud computing manifests as a profusion of third-party
components: widgets doing A/B testing, beacons providing feedback, and scripts
tracking user behavior and providing analytics. This increases the number of
components on a given web page while contributing little to the overall payload.
We saw that roughly 48% of the requests in the HTTP Archive are classified as
third-party.
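The spirit of that classification is easy to sketch. The example below uses hypothetical URLs and a naive two-label domain heuristic (a real classifier would consult the Public Suffix List) to count requests whose registrable domain differs from the page's:

```python
from urllib.parse import urlparse

def base_domain(host):
    # Naive registrable-domain heuristic: keep the last two labels.
    # A production classifier should use the Public Suffix List instead.
    return ".".join(host.split(".")[-2:])

def third_party_share(page_url, request_urls):
    # Fraction of requests served from a different registrable domain
    # than the page itself.
    page_dom = base_domain(urlparse(page_url).hostname)
    third = [u for u in request_urls
             if base_domain(urlparse(u).hostname) != page_dom]
    return len(third) / len(request_urls)

share = third_party_share(
    "https://www.example.com/index.html",
    ["https://www.example.com/app.js",        # first-party
     "https://cdn.example.com/style.css",     # first-party (same base domain)
     "https://analytics.tracker.io/beacon.gif",  # third-party
     "https://widgets.social.net/like.js"],      # third-party
)
print(f"{share:.0%} of requests are third-party")  # → 50%
```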

[Figure: search interest in cloud services over time, 2005-2015]

With the explosion of mobile devices, the consolidation of cloud
services, and the perennial expectation that compute and network
just keep getting better and faster, the logical conclusion is that the
mobile web must be faster. But the reality is quite different.

THE MOBILE WEB IS IN FACT GETTING SLOWER OVER TIME
When we say faster, we mean visually/perceptually faster.
So the question boils down to: what metric best correlates
with the visual perception of a page load? onLoad isn't a
good metric, since the page load event can be artificially
triggered by sites even when no visual content is present;
neither is Start Render, which can fire after onLoad. So we
settled on Speed Index, a WebPagetest measurement of how
quickly the screen paints (perceived load time). The faster you
paint the whole screen, the lower the score. A Speed Index of
less than a second is the holy grail of web performance.
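As a rough illustration of how Speed Index works (a simplified sketch, not WebPagetest's actual implementation, which samples visual completeness from video frames), it is the area above the visual-completeness curve over time:

```python
def speed_index(frames):
    """Approximate Speed Index: the integral of (1 - visual completeness)
    over time, in milliseconds. `frames` is a list of
    (timestamp_ms, completeness in 0.0-1.0) pairs, in time order."""
    si = 0.0
    prev_t, prev_c = 0, 0.0
    for t, c in frames:
        si += (t - prev_t) * (1 - prev_c)  # area of the still-incomplete region
        prev_t, prev_c = t, c
    return si

# A page that paints 80% of its pixels at 500 ms and finishes at 1500 ms:
print(speed_index([(500, 0.8), (1500, 1.0)]))  # → 700.0
```

Note how a page that gets most pixels on screen early scores better than one that paints everything at the end, even if both "finish" at the same time.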

[Figure: median Speed Index over time, 7/2012-7/2014, for the top 1K, top 10K, and top 100K URLs]

We tracked Speed Index for the top 1,000, top 10,000, and top 100,000 sites as
cohorts to check whether any apparent trend is uniform or differs across the
groupings. From what we can see, the trend is uniform: over the last 2 years,
mobile websites have been getting slower, not faster, despite all the advances
that have been made.

(Note: the collection of data changed a bit in the middle, when the
throughput of the mobile device measurement was altered to use an
emulated 3G network in June 2013. However these changes do not
affect our conclusion in any meaningful way.)
So why is the Mobile Web getting slower?

CONTENT IS GETTING FATTER

The first, fairly obvious, reason is the growth of richer and more
content-intensive web sites. To substantiate this claim, we took
a look at the page weight metric.

[Figure: median page weight (bytes) over time, 7/2012-7/2014, for the top 1K, top 10K, and top 100K URLs]

As you can see, the uniform trend across all cohorts is a marked
increase in page bytes.
Next, we wanted to see whether we could pin this increase on particular
types of web traffic, so we separated out the page weight data
by content type:
[Figure: median page size by content type (HTML, JS, CSS, image bytes) over time, for the top 1K, top 10K, and top 100K URLs]

Again, the uniform trend shows that content sizes are bloating across all
content types, ranging from a few percent growth in HTML to a near-doubling
of image bytes.

NETWORK LATENCY OF THE ACCESS MEDIUM
A quantitative study performed by Mike Belshe (one of the creators
of the SPDY protocol) on the impact of varying bandwidth vs. latency
on page load times for some of the most popular destinations on the
Web showed the following:

[Figure: page load time (ms) as bandwidth increases from 1 to 10 Mbps, and as latency decreases from 200 ms to 20 ms]
Looking at this graph, one would question any provider touting
bandwidth increases as a panacea for web page performance.

So we ask: what is the trend in RTTs across the world? Let's consult an
active measurement database maintained by Les Cottrell to see what
the trend is there.

[Figure: average RTT (ms) over time, 2011-2014, to world regions including Africa, Asia, Europe, Latin America, the Middle East, North America, Oceania, and Russia]

As you can see from the data above, if users double their bandwidth
without reducing their RTT significantly, the effect on web browsing
will be a minimal improvement. However, decreasing RTT, regardless
of current bandwidth, always helps make web browsing faster. As Mike
Belshe puts it: "To speed up the Internet at large, we should look for
more ways to bring down RTT. What if we could reduce cross-Atlantic
RTTs from 150ms to 100ms? This would have a larger effect on the speed
of the internet than increasing a user's bandwidth from 3.9Mbps to
10Mbps or even 1Gbps."
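Belshe's observation can be illustrated with a toy first-order model: page load time is roughly the number of serial round trips (DNS, TCP, TLS, request/response chains) times RTT, plus pure transfer time. The constants below (20 round trips, a 1,000 KB page) are illustrative assumptions, not measurements:

```python
def page_load_ms(rtt_ms, bandwidth_mbps, total_kb=1000, round_trips=20):
    # Serial round trips dominated by latency, plus byte transfer time.
    transfer_ms = total_kb * 8 / (bandwidth_mbps * 1000) * 1000
    return round_trips * rtt_ms + transfer_ms

base     = page_load_ms(rtt_ms=100, bandwidth_mbps=5)   # 2000 + 1600 = 3600 ms
more_bw  = page_load_ms(rtt_ms=100, bandwidth_mbps=10)  # doubling bandwidth
less_rtt = page_load_ms(rtt_ms=50,  bandwidth_mbps=5)   # halving RTT
print(base, more_bw, less_rtt)  # halving RTT wins: 3600.0 2800.0 2600.0
```

In this model, halving RTT saves 1,000 ms while doubling bandwidth saves only 800 ms, and each further bandwidth doubling saves less.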


As you can see, in the last couple of years there has been a small improvement
in RTTs, but by and large nothing meaningful.
Since the majority of e-commerce and hosting providers happen to be in the US,
let's look at FCC reports on latencies across DSL, cable, and fiber.
[Figure: FCC Measuring Broadband America average latency (ms) by advertised speed, 1-75 Mbps, for DSL, cable, and fiber]

In 2014, fiber-to-the-home services provided 24 ms round-trip latency
on average, while cable-based services averaged 31 ms, and DSL-based
services averaged 48 ms. Compare this to 2013, when fiber-to-the-home
services averaged 18 ms, cable-based services 26 ms, and DSL-based
services 43 ms.

Overall, latency is not getting any better; if anything, it's getting worse. The average RTT to Google
is pretty much the same as it was in 2010, despite all the innovations brought to us by this awesome
company. An alternate study by M-Lab stresses this point of latency degradation due to interconnections
between providers.
So far all the above data is for desktop alone, so let's focus on latency numbers from AT&T:
Technology   AT&T core network latency
LTE          40-50 ms
HSPA+        50-200 ms
HSPA         150-400 ms
EDGE         600-750 ms
GPRS         600-750 ms

To put those latencies in context, also consider the bandwidth available by technology:

Generation   Data rate
2G           100-400 Kbit/s
3G           0.5-5 Mbit/s
4G           1-50 Mbit/s
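Combining the two tables gives a feel for how much the radio generation dominates fetch times. The sketch below uses the midpoints of the latency ranges above, representative data rates, and a hypothetical 100 KB payload:

```python
# Rough time to fetch a single small resource over each radio technology:
# one round trip at the technology's typical latency, plus payload
# transfer at a representative data rate (midpoints of the ranges above).
radio = {
    "LTE":   {"rtt_ms": 45,  "mbps": 25.0},
    "HSPA+": {"rtt_ms": 125, "mbps": 2.75},
    "EDGE":  {"rtt_ms": 675, "mbps": 0.25},
}

def fetch_ms(tech, payload_kb=100):
    t = radio[tech]
    transfer_ms = payload_kb * 8 / (t["mbps"] * 1000) * 1000
    return t["rtt_ms"] + transfer_ms

for tech in radio:
    print(f"{tech}: {fetch_ms(tech):.0f} ms for 100 KB")
```

Even before any transfer happens, a single round trip on older radio technologies costs more than the entire fetch does on LTE.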

Since we are talking about mobile data, let's see the overall path
a packet has to traverse to get service over the internet:

[Figure: the mobile packet path: device radio (2G/3G/4G/Wi-Fi) -> radio access network (cell-site sectors, each with multiple carriers/spectrums in use) -> backhaul -> core network -> wireline Internet backbone]

As you can see, it's the confluence of a lot of technologies that helps
bring information to your fingertips.

While the middle mile was the bottleneck in the
desktop world, in the mobile world the Radio
Access Network (RAN) is the new bottleneck for
mobile browsing. More specifically, let's take
a look at the capacity of a typical cell tower: a
major-market tower has 3 sectors, each sector
has 2 carriers, and each carrier has 3.6 Mbps of
capacity, for a total of 21.6 Mbps.

Typically these towers are provisioned to operate
at 75% utilization, which means we have only
16.2 Mbps to use. The average voice call takes 12 Kbps,
which means a maximum of 1,350 calls are supported
before quality degrades. Add the average fat webpage to this
mix and you are looking at a maximum of 8 webpages
holding the tower at capacity. This is the new bottleneck
in the whole mobile user experience, and there is not
much a user or content publisher can do about it
except send the most important bits of the application
in the first few packets.

CONCLUSIONS
So we've talked about a lot of different elements in this article. To summarize, we saw that web content is getting
richer while device diversity is exploding, and that we cannot pin our hopes on faster lanes, given that network
access times have been stagnant for over a decade (and will likely continue to be so in the near future). All these
forces combine to create a new pressure point: the RAN, which is already at capacity.
While I have mostly dwelt on the problems in this post, the solution space for mobile web applications is to:

make things smaller (without losing quality of experience)
move them closer to the user (into the browser, not some server in the cloud, given the RTT)
cache them as long as we can (existing solutions do not)
load application resources intelligently (most significant resources first)
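As an illustration of the last point, loading the most significant resources first, here is a minimal sketch that ranks a page's resources by a hand-assigned priority and emits preload hints for the critical ones. The resource names and the priority ordering are illustrative assumptions, not a prescribed scheme:

```python
# Render-blocking and visually significant types come first.
PRIORITY = {"css": 0, "font": 1, "js": 2, "image": 3}

def preload_headers(resources, critical=3):
    """Return Link preload headers for the `critical` highest-priority
    resources, so the most significant bytes go out first."""
    ordered = sorted(resources, key=lambda r: PRIORITY[r["type"]])
    return [f'Link: <{r["url"]}>; rel=preload; as={r["as"]}'
            for r in ordered[:critical]]

for header in preload_headers([
    {"url": "/hero.jpg",     "type": "image", "as": "image"},
    {"url": "/app.css",      "type": "css",   "as": "style"},
    {"url": "/app.js",       "type": "js",    "as": "script"},
    {"url": "/brand.woff2",  "type": "font",  "as": "font"},
]):
    print(header)
```

A real implementation would also account for in-viewport position and (for fonts) crossorigin handling, but the core idea is the same: an explicit priority order rather than document order.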

Sounds easy enough, yet it requires a very different approach to application delivery, one that we at Instart
Logic, with our Software-Defined Application Delivery platform, are focused on.

REFERENCES

HTTP Archive
Cisco VNI
Ilya Grigorik's blog
Android Fragmentation
Why Mobile Apps are Slow
More Bandwidth Doesn't Matter Much
M-Lab Interconnection Study
FCC Broadband America
Netflix ISP Speed Index
PingER Project
High Performance Browser Networking
Bessemer Cloud
