G00260996

Hype Cycle for the Telecommunications Industry, 2014
Published: 4 August 2014

Analyst(s): Kamlesh Bhatia

CSPs' future existence will depend on their ability to deliver individual experiences over an industrial-scale infrastructure. This Hype Cycle examines the key systems, processes and platforms that will help CSPs stay ahead of their competitors and remain relevant to their consumers.
Table of Contents
Analysis.................................................................................................................................................. 3
What You Need to Know.................................................................................................................. 3
The Hype Cycle................................................................................................................................ 4
The Priority Matrix.............................................................................................................................7
Off the Hype Cycle........................................................................................................................... 8
On the Rise...................................................................................................................................... 9
Cognizant Computing.................................................................................................................9
IoT Platform.............................................................................................................................. 12
Open-Source Telecom Operations Management Systems........................................................13
5G............................................................................................................................................ 15
DevOps.................................................................................................................................... 17
At the Peak.....................................................................................................................................19
In-Memory Computing..............................................................................................................19
Subscription Billing................................................................................................................... 22
Business Capability Modeling................................................................................................... 23
Managed Mobility Services....................................................................................................... 25
Open APIs in CSPs' Infrastructure............................................................................................ 27
IT/OT Integration.......................................................................................................................29
Communications Service Providers as Cloud Service Brokerages.............................................30
Heterogeneous Networks......................................................................................................... 33
Hybrid Mobile Development......................................................................................................35

Social Network Analysis............................................................................................................36


Capacity-Planning and Management Tools...............................................................................38
Context-Enriched Services....................................................................................................... 40
CSP Network Intelligence......................................................................................................... 42
Sliding Into the Trough....................................................................................................................43
4G Standard.............................................................................................................................43
Big Data................................................................................................................................... 45
Network Function Virtualization................................................................................................. 47
Mobile Self-Organizing Networks.............................................................................................. 49
Voice Over LTE.........................................................................................................................51
Cloud-Based RAN.................................................................................................................... 53
Hybrid Cloud Computing.......................................................................................................... 54
Software-Defined Networks...................................................................................................... 56
Innovation Management........................................................................................................... 58
Mobile QoS for LTE.................................................................................................................. 60
Mobile Unified Communications................................................................................................62
Telecom Analytics.....................................................................................................................64
Real-Time Infrastructure............................................................................................................66
Machine-to-Machine Communication Services......................................................................... 68
Next-Generation Service Delivery Platforms.............................................................................. 71
Retail Mobile Payments............................................................................................................ 74
Mobile Subscriber Data Management....................................................................................... 76
Service-Oriented Architecture in OSS/BSS and SDP................................................................ 78
Location-Based Advertising/Location-Based Marketing............................................................ 80
Climbing the Slope......................................................................................................................... 83
Content Integration................................................................................................................... 83
Infrastructure as a Service (IaaS)............................................................................................... 84
Mobile CDN..............................................................................................................................86
Rich Communication Suite........................................................................................................88
Mobile Advertising.................................................................................................................... 90
Enterprise Architecture............................................................................................................. 92
IP Service Assurance................................................................................................................ 94
Entering the Plateau....................................................................................................................... 96
Mobile DPI................................................................................................................................96
Appendixes.................................................................................................................................... 97
Hype Cycle Phases, Benefit Ratings and Maturity Levels.......................................................... 99

Gartner Recommended Reading........................................................................................................ 100

List of Tables
Table 1. Hype Cycle Phases.................................................................................................................99
Table 2. Benefit Ratings........................................................................................................................99
Table 3. Maturity Levels......................................................................................................................100

List of Figures
Figure 1. Hype Cycle for the Telecommunications Industry, 2014........................................................... 6
Figure 2. Priority Matrix for the Telecommunications Industry, 2014........................................................8
Figure 3. Hype Cycle for the Telecommunications Industry, 2013......................................................... 98

Analysis
What You Need to Know
Communications service providers (CSPs) find themselves at a crossroads between remaining efficient connectivity providers and enabling new revenue opportunities through innovation and collaboration. The change is largely driven by users gravitating toward convergence, content and a better user experience, often sourced as digital services from Internet-based or over-the-top (OTT) providers.
The migration to digital services is a fundamental shift and is having a transformational impact on
CSP business and operating models. CSPs must prepare to face this challenge at all levels, starting
with a future view of technology and influences on market developments.
Use of digital technology among individual and enterprise consumers is becoming seamless, driving greater expectations in terms of customer experience and servicing client needs in real time. Access to new forms of information, in conjunction with technology (devices, social platforms and so on), influences customers' buying behavior, both their own and that of others, through network and social media interactions.
Digitization is also starting to blur the traditional organizational boundaries between CSP IT and the network, creating internal struggles around the investment road map, ownership and skills. To achieve expected business outcomes, CSPs must create a more responsive organization powered by greater use of data, analytics, platforms, and simplified user and partner interfaces. This will help underpin the CSP role as an enabler for new growth areas like machine-to-machine (M2M) and the Internet of Things (IoT) that demand seamless integration of network and IT in a business context.


The broader focus of the 2014 Hype Cycle is to highlight technologies that contribute to trends that Gartner is observing in the market (see "Market Trends: Worldwide, Top Five Disruptive Trends for CSPs, 2014-2019").
This Hype Cycle is primarily intended for CSP CTOs who want to assess emerging technologies, their relative maturity and market adoption for use in solutions aimed at improving the end-user experience, sharpening competitive differentiation and monetizing new opportunities. The technology profiles in this Hype Cycle may also interest CIOs, CMOs and CFOs in CSP organizations who want to evaluate the implications across the organization.

The Hype Cycle


This year's Hype Cycle includes technologies that will enable CSPs to become providers of
experience-led innovation and advanced technology solutions. The technologies included are key
topics of discussion in the CSP arena and form the "building blocks" for CSPs to create new
delivery and monetization capabilities. There have been no methodological changes to the selection
process for technologies. Additions and deletions to the technology profiles are noted below.
Digitization will impact all parts of the CSP operational and business environment, including established processes for product and customer management and operations. The traditional model of delivering capabilities (process, applications and architecture) mapped to individual services will not be sustainable, and CSPs will have to adopt a more scalable, industrialized approach for supporting digital services.
As CSPs grapple with the technical challenges of building their own "digital technology factory," they must also deal with the paradoxical trend of personalization: end users demanding a customized experience from marketing through product support. The ability to deliver an individual experience over an industrial IT infrastructure will form the basis for CSPs' future existence.
To strike the right balance, CSPs must focus on:

Customer centricity. Retool internal processes to make them more aligned to customer needs. Leverage customer data and analytics to deliver a consistent experience across channels and services.

Product leadership. Focus on reusable technology and process assets that allow CSPs to not only "succeed fast," but also "fail fast" and "fail inexpensively." Reduce time to market for new ideas and leverage partners to co-innovate.

Operational excellence. Deliver seamless service in terms of coverage and quality, and with the right levels of security and compliance for the business.

Cost leadership. Improve productivity by supporting self-service, automation and business process improvement. Leverage scalable infrastructure solutions that impact positively on expense ratios and drive standardization.

The technology profiles in this Hype Cycle contribute toward the focus areas mentioned earlier. They also demonstrate the influence of the Nexus of Forces (cloud, mobility, social and information) on CSPs' IT architecture, and technologies that can enable new business opportunities (see "How the Nexus of Forces Deeply Transforms Communications Service Providers' Strategy").
The Hype Cycle has a number of technologies that are new entrants this year, starting from the Innovation Trigger. These technologies, such as cognizant computing, IoT platforms and DevOps, are better suited to Type A organizations (aggressive technology adopters). They offer an early-bird advantage to CSPs seeking opportunities in new, related spaces.
Technologies like subscription billing, open APIs and integration of IT and OT are nearing the peak on the Hype Cycle, indicating extensive interest from the media, the vendor community and technology evangelists in highlighting potential benefits. Some CSPs are making early investments in these technologies as products/solutions mature and there is evidence of potential benefit. Approaches like smart city frameworks allow CSPs to engage with local government stakeholders and be part of new, developing ecosystems early in the planning cycle.
Technology profiles in the Trough of Disillusionment are past the peak and seeing gradual uptake, mostly by Type B and C organizations: CSPs that want to adopt new technology but are generally risk-averse or have smaller budgets. Technologies like 4G, HetNets, analytics to tap network intelligence, M2M services and the CSP role as a service broker are now well understood, with some examples of successful outcomes. We expect these to see sustained investment over the next few years. Software-defined networking (SDN) and network function virtualization (NFV) will transition from field trials to implementations as standards evolve over time.
Many technologies in the Slope of Enlightenment are now considered hygiene factors for CSPs to build effective products and operational focus. The use of service-oriented architecture (SOA) and reusable components is at the heart of all new architectures and packaged product offerings for CSPs. In the same way, the ability to offer location-based services and better content and infrastructure management capabilities is the basis for new opportunities with enterprise clients.
This Hype Cycle complements other Hype Cycles offering technology insight into specific areas of CSP strategy and operations; see "Hype Cycle for Communications Service Provider Infrastructure, 2014," "Hype Cycle for Communications Service Provider Operations, 2014," "Hype Cycle for Wireless Networking Infrastructure, 2014" and "Hype Cycle for Communications Service Provider Digital Services Enablement."


Figure 1. Hype Cycle for the Telecommunications Industry, 2014


(Graphic: technologies positioned along the Hype Cycle curve, with expectations on the vertical axis and time on the horizontal axis, moving from the Innovation Trigger through the Peak of Inflated Expectations and the Trough of Disillusionment to the Slope of Enlightenment and the Plateau of Productivity. Each technology is coded by the time in which the plateau will be reached: less than 2 years, 2 to 5 years, 5 to 10 years, more than 10 years, or obsolete before plateau. As of August 2014.)

Source: Gartner (August 2014)


The Priority Matrix


CSPs' future success rests on their ability to formulate operational and business strategies that put more control in the hands of consumers to create their own experiences. This involves finding new ways to extract more information from consumers and their service habits, becoming agile and flexible in product development, collaborating with the broader ecosystem and being responsive to change.
In this section, we highlight several profiles that, in our view, are transformational or have a high benefit rating. These technologies and underlying processes have a strong impact on CSPs' ability to create differentiation through a superior user experience and targeted new areas of growth.
Context-enriched services and network function virtualization (NFV) are two profiles that Gartner believes should be observed closely and will play out in the next three to five years. Context-enriched services allow CSPs to leverage contextual elements about the service and situation of their customers to better target new opportunities and improve real-time response. The underlying architectural constructs involving services and APIs allow context enrichment of business applications, platforms and support systems, allowing CSPs to support advanced use cases for marketing and customer-facing projects (often involving other big data initiatives).
NFV, on the other hand, will allow CSPs to achieve a higher level of efficiency by applying IT concepts of virtualization and elastic provisioning in a cloud environment. The success of such initiatives often lies in the seamless integration of CSPs' IT and OT environments, something we highlight as having high-level benefits for CSPs over the next five years and beyond.
M2M, IoT and real-time capabilities present an opportunity for CSPs to layer enhanced value over connectivity and target a higher share of enterprise and consumer spending in these areas. To stand a chance against other emerging players, CSPs must make sustained investments to extend their operational capabilities close to the end user at the edge of the network and offer services that enhance the value in a business context.
The use of platforms that leverage the power of analytics and real-time infrastructure is required, alongside engagement with business stakeholders, to develop technology and market road maps for future growth. We expect "smart initiatives" to become part of the agenda for most enterprises, as well as government and urban planning committees, over the next five years, translating into multiple opportunities for players offering connectivity, IT infrastructure and associated services.
Other notable technologies that will make an impact are cognizant computing, featuring interplay between smart devices such as wearables, software and contextual information, and DevOps, for improved collaboration between operations and development teams.


Figure 2. Priority Matrix for the Telecommunications Industry, 2014

(Graphic: a matrix that plots each technology's benefit rating (transformational, high, moderate or low) against its years to mainstream adoption (less than 2 years, 2 to 5 years, 5 to 10 years, or more than 10 years). Technologies rated transformational: Context-Enriched Services, Hybrid Cloud Computing, Network Function Virtualization, Next-Generation Service Delivery Platforms, Big Data, Cognizant Computing, DevOps, In-Memory Computing, IoT Platform, Machine-to-Machine Communication Services and Real-Time Infrastructure. Rated high: Mobile CDN, 4G Standard, 5G, Mobile DPI, Business Capability Modeling, Capacity-Planning and Management Tools, Cloud-Based RAN, Communications Service Providers as Cloud Service Brokerages, Mobile Self-Organizing Networks, Heterogeneous Networks, Hybrid Mobile Development, Infrastructure as a Service (IaaS), Enterprise Architecture, IT/OT Integration, IP Service Assurance, Location-Based Advertising/Location-Based Marketing, Mobile Advertising, Social Network Analysis, Mobile QoS for LTE, Software-Defined Networks, Open APIs in CSPs' Infrastructure, Service-Oriented Architecture in OSS/BSS and SDP, and Telecom Analytics. Rated moderate: Rich Communication Suite, CSP Network Intelligence, Innovation Management, Mobile Subscriber Data Management, Managed Mobility Services, Voice Over LTE, Retail Mobile Payments, Open-Source Telecom Operations Management Systems and Subscription Billing.)
As of August 2014
Source: Gartner (August 2014)

Off the Hype Cycle


A number of profiles have been removed from this year's Hype Cycle for the Telecommunications Industry and new ones added to sharpen the focus for CSPs evolving as providers of experience-led innovation and advanced technology solutions. Additionally, in some cases, profiles have been renamed to reflect the coverage better and align with market nomenclature; for example, cross-channel analytics changed to customer journey analytics, and network intelligence to CSP network intelligence.
The profiles being removed are: behavioral economics, big data, bring-your-own-device services (consumer devices), browser client OS, business impact analysis, cloud computing, cloud management platforms, cloud master data management (MDM) hub services, cloud UC (UCaaS), cloud/Web platforms, context delivery architecture, convergent communications advertising platforms (CCAPs), telecom end-user experience monitoring, hybrid mobile development, MDM, mobile cloud, mobile data protection, mobile device management, mobile social networks, mobile virtual worlds, OneAPI (telecom), OpenFlow, open-source communications, open-source virtualization platforms, operations support system (OSS)/business support system (BSS) customer experience management, personal cloud, social IT management, SaaS, Web analytics, Web experience analytics, Web real-time communications, and Web-oriented architecture.
New profiles being added are: cognizant computing, IoT platforms, open-source telecom operations management systems, 5G, DevOps, in-memory computing, subscription billing, business capability modeling, managed mobility services, open APIs in CSPs' infrastructure, IT/OT integration, heterogeneous networks, capacity-planning and management tools, CSP network intelligence, 4G standards, mobile self-organizing networks, voice over LTE, cloud-based radio access network (RAN), hybrid cloud computing, software-defined networks, innovation management, mobile QoS for LTE, telecom analytics, next-generation service delivery platforms, retail mobile payments, mobile subscriber data management, location-based advertising/location-based marketing, application security as a service, mobile advertising, enterprise architecture, IP service assurance, and mobile DPI.

On the Rise
Cognizant Computing
Analysis By: Jessica Ekholm; Brian Blau
Definition: Cognizant computing is the next step in the evolution of the personal cloud. It uses big data and simple rule sets to build up personal and commercial information about a consumer through four stages: "Sync Me," "See Me," "Know Me" and "Be Me."
Position and Adoption Speed Justification: In the next few years, cognizant computing and smart
machines will become two of the strongest forces in consumer and business IT. Any company in
the business of providing a service, using apps or selling devices will be affected by cognizant
computing in some way. Gartner predicts that, by 2016, OSs such as iOS, Android and Windows
will no longer define a consumer's choice of smartphone. Cognizant computing also heralds the
next evolution of the personal cloud, as consumers switch their focus away from devices to apps
and the cloud (with negative implications for smartphone vendors such as Apple and Samsung). We
predict that, by 2015, most of the largest companies in the world will be using cognizant computing
to fundamentally change the way they interact with their customers.


By amalgamating and analyzing data in the cloud from many sources (including apps, smartphones
and wearable devices), cognizant computing will provide contextual insights into how people
behave: what they watch, do and buy, who they meet, and where these activities take place. This
will help companies increase the lifetime value of their increasingly fickle customers, improve
customer care, boost their sales channels, and make their customer relationships more personal
and relevant. In essence, this new development will help companies innovate and create new
business opportunities.
Cognizant computing has four stages:

Sync Me: Apps, content and information are made available across devices and shared contextually.

See Me: Data is continuously collected about users and their devices to gain an understanding of users' context.

Know Me: Understanding users' wants and needs, and proactively offering products and services based on pattern recognition and other machine-learning approaches.

Be Me: Developing intelligent apps and services that act on users' behalf.
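To make the stages more concrete, here is a minimal, purely illustrative sketch of a "Know Me"-style rule applied to context data already gathered in the first two stages. The event fields, the threshold and the suggested offer are assumptions made for illustration only; they do not describe any vendor's actual implementation.

from collections import Counter

# Hypothetical context events collected across a user's devices ("Sync Me"/"See Me").
events = [
    {"type": "location", "value": "gym", "hour": 7},
    {"type": "location", "value": "gym", "hour": 7},
    {"type": "location", "value": "office", "hour": 9},
    {"type": "purchase", "value": "protein bar", "hour": 8},
]

def know_me_suggestion(events, now_hour):
    # Simple rule set: if the user is usually at the gym around this hour,
    # proactively surface a related offer ("Know Me" behavior).
    gym_hours = Counter(e["hour"] for e in events
                        if e["type"] == "location" and e["value"] == "gym")
    if gym_hours.get(now_hour, 0) >= 2:  # crude pattern threshold, assumed
        return "Suggest: workout playlist and a protein bar offer"
    return None

print(know_me_suggestion(events, now_hour=7))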

At the moment, most activity is around the first two stages. As big data and the Internet of Things
(IoT) become more pervasive, the vast amounts of information produced will enable complex
systems to become more "intelligent," offering brand new opportunities in the latter two stages.
This won't be without challenges or risk, however. Critical issues that will have to be addressed
include consumer privacy, quality of execution and becoming a trusted vendor.
Cognizant computing is beginning to take shape via many mobile apps, smartphones and wearable
devices that collect and sync information about users, their whereabouts and their social graph
(mainly in the "Sync Me" and "See Me" stages of development). In addition, we are seeing the first
personal digital assistants appearing with Microsoft Cortana, calendaring apps such as Tempo AI
and, to an extent, Apple's Siri and Google Now. In the next two to five years, the IoT and big data
will meet analytics, and more data will make systems smarter. By 2017, smartphones will handle
some tasks for us better than we can do ourselves. At that point, consumers' personal clouds will
interact with their smartphones and other devices, and the intricate app ecosystems they have
created. The vast amount of very personal data that will flow between users and brands is likely to concern users who do not necessarily want to share this much data. Because this is a concern for all involved, it is advisable to work on creating a trusted relationship between the user and the brand early in the relationship and, of course, to add strong privacy and security controls at all consumer touchpoints.
User Advice:

Cognizant computing drives innovative analytics, apps, data and devices. Use its four-stage framework to stage new business models, to identify supporting service opportunities and app and device features, and as a link to smart machines and the digitalization of business.


Many cognizant computing services are still evolving, but look for companies that are
developing deep learning, analytics, sophisticated algorithms and location-based technologies.

Evaluate which cognizant computing assets you currently have and which you can either create in-house or do without; also consider which partnerships you need to form to create a strong set of assets over the next 24 months, to reap future revenue.

Business Impact: We predict that most of the world's largest 200 companies will utilize the full toolkit of big data and analytical tools to refine their offers and improve their customer experience by 2015. Thus, over the coming two to five years, we expect consumer-focused companies to use cognizant computing techniques to an increasing degree. This in turn will have a big impact, affecting entire ecosystems and value chains across IT.
Technology and service providers that see opportunities within cognizant computing in the early years will create strong early-mover advantages, meaning they will be better equipped to develop stronger, more reliable ecosystems and reap early revenue benefits. They will also be able to deal with threats and issues with greater ease than late movers. At present, in 2014, no vendor has a full set of cognizant computing capabilities. That said, vendors such as Amazon, Apple, Facebook, Google and Microsoft have considerable collections of individual capabilities, while a host of smaller, niche players such as Anki, Medio and Tempo AI have some interesting propositions. The big vendors are already racing ahead, and consumer-focused businesses that have not yet entered this space are in danger of falling behind quickly.
Benefit Rating: Transformational
Market Penetration: 1% to 5% of target audience
Maturity: Emerging
Sample Vendors: Amazon; Apple; Facebook; Google; Here
Recommended Reading:
"Market Trends: Cognizant Computing Will Reshape Mobile and App Market Revenue"
"Market Trends: Get Ahead in the Early Cognizant Computing Market by Smart Consumer
Segmentation"
"Market Insight: Virtual Assistants Will Make Cognizant Computing Functional and Simplify Apps
Usage"
"The Disruptive Era of Smart Machines Is Upon Us"
"Predicts 2014: Cognizant Computing Another Kind of Smart Device"
"Predicts 2014: Consumer Analytics and Personalized User Experiences Transform Competitive
Advantage"
"Smart Machines Mean Big Impacts: Benefits, Risks and Mass Disruption"


IoT Platform
Analysis By: Alfonso Velosa
Definition: An Internet of Things (IoT) platform enables an enterprise to better use the information present in its devices. The platform starts by obtaining data from connected devices, and aggregating and connecting it, via a gateway and the cloud, to the enterprise's systems, where that data can be analyzed and converted into insights and action by enterprise employees and systems. An agile IoT platform should enable enterprises to build solutions with cloud-, edge- or gateway-centric IoT architectures.
Position and Adoption Speed Justification: There is significant opportunity for businesses to
achieve greater value from the data that is located in devices that are, or will be, spread throughout
the enterprise. Unfortunately, this data has been locked into the devices due mostly to lack of
connectivity but also due to lack of standards, systems and processes to obtain this data
systematically, and even ignorance of the value of the information on those devices.
Enterprises continue to deploy IoT systems, due to benefits like asset optimization, new revenue
models and so on. This includes fleet management options, maintenance optimization analysis of
assets, or charge-per-use models that we see in insurance. Thus, there is an increasing need,
possibility and opportunity for IoT platforms to collect, process, analyze and disseminate the data
from devices in an integrated way. To gain the full value of the data in these systems, enterprises
will need a system that incorporates:

Devices: The device may, or may not, be a part of the platform. It has sensors, processing
capabilities and connectivity to collect data and share it with other systems. It will collect data
with appropriate enterprise contextual elements, such as location and environmental
parameters.

Device Management: The device functionality or application logic and security needs to be
managed; it may also be necessary to format or process some of the data internally. This may
reside on the device, the gateway, the cloud or a combination thereof.

Connectivity: The data will need to be transmitted from the devices to enterprise systems. This
may occur via a gateway.

Application and API Layer: This layer enables enterprises to make the most of devices through
business rules, functions or applications, and/or APIs (to be consumed by other entities) that
the device will need to execute its core functions/applications. This application logic may reside
on the device, the gateway, the cloud or a combination thereof.

Security and Authentication: A comprehensive security and authentication process is required to protect the integrity of the devices and the data. The process will need to be able to provide scheduled and ad hoc updates on the security profile.

Analytics and Presentation Layer: The data will need to be analyzed and presented in a
format that facilitates the decision-making and action capabilities of enterprise IT and
operational technology employees and automated systems. This may reside on the device, the
gateway, the cloud or a combination thereof.
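As a rough, vendor-neutral sketch of how these layers can fit together, the example below follows a reading from the device layer through a gateway buffer into a simple analytics step. The field names, the JSON uplink format and the alert threshold are assumptions made for illustration, not features of any specific IoT platform.

import json, time, statistics

# Device layer: a reading captured with enterprise context (site, timestamp).
def device_reading(device_id, temperature_c, site):
    return {"device_id": device_id, "temperature_c": temperature_c,
            "site": site, "ts": time.time()}

# Connectivity/gateway layer: aggregate readings and forward them toward the
# cloud (here, "forward" just serializes the buffer to JSON for a hypothetical uplink).
class Gateway:
    def __init__(self):
        self.buffer = []
    def collect(self, reading):
        self.buffer.append(reading)
    def uplink(self):
        payload, self.buffer = json.dumps(self.buffer), []
        return payload

# Analytics/presentation layer: turn raw data into an insight an operator can act on.
def analyze(payload, alert_threshold=80.0):
    readings = json.loads(payload)
    avg = statistics.mean(r["temperature_c"] for r in readings)
    alerts = [r for r in readings if r["temperature_c"] > alert_threshold]
    return {"average_c": avg, "alerts": alerts}

gw = Gateway()
gw.collect(device_reading("pump-001", 72.5, "plant-A"))
gw.collect(device_reading("pump-002", 85.1, "plant-A"))
print(analyze(gw.uplink()))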


User Advice: System developers will want to look at aspects of the business that may benefit from
integration of the data from devices. Observe that while the technical elements may be quite
challenging, the developers and their management should also consider the cultural elements of the
enterprise just as thoroughly. For example, factor into your plans how the data from an IoT platform
will fit into the work processes for an enterprise, as well as how to incentivize employees to leverage
the data to its fullest potential.
Recognize that data format standards vary by industry, often by vendor and by legacy systems, so
any IoT platform that pulls in enough data will need to be capable of addressing multiple formats
and industry standards. Thus, ensure you understand the value and cost of the proper use of the
data in the system, and how it will cover the costs of any necessary consulting work to audit and
integrate all of your data sources and types.
Business Impact: The value of an integrated system will depend on its ability to leverage industry-specific parameters. An IoT platform has the potential to help enterprises that implement it outperform their peers in maximizing the use of information, as well as to extend operational elements, such as asset management, or create new value chains for the enterprise to drive new revenue streams and deliver enhanced value to customers.
Benefit Rating: Transformational
Market Penetration: 1% to 5% of target audience
Maturity: Emerging
Sample Vendors: ARM; Axeda; Eurotech; Jasper; Microsoft; ThingWorx
Recommended Reading:
"Uncover Value From the Internet of Things With the Four Fundamental Usage Scenarios "

Open-Source Telecom Operations Management Systems


Analysis By: Norbert J. Scholz
Definition: This term describes how CSP back-office solutions, such as billing, charging, revenue assurance, fraud management, provisioning, and network and inventory management, can be acquired through an open-source license process. Open-source software is available under license and distribution conditions specified by the Open Source Initiative (http://www.opensource.org). An open-source license typically permits free use, access to the source code, modification and redistribution, subject to the conditions of the entity distributing it.
Position and Adoption Speed Justification: Open-source software plays a strong role in laying the
foundations of telecom operations management systems (TOMS) solutions, and many
communications service providers (CSPs) have dedicated internal resources to developing these
solutions in-house. Currently, open-source software is used mainly in the middleware layer to
integrate various platforms. Open-source software on the solution and application level remains in

Page 13 of 102

Gartner, Inc. | G00260996


This research note is restricted to the personal use of yolanda.robles@inegi.org.mx

This research note is restricted to the personal use of yolanda.robles@inegi.org.mx

the early stages of development. Commercial deployment of open-source TOMS solutions is


minimal and mainly by small CSPs. Among the main concerns about open-source TOMS is the lack
of dedicated product road maps, proven scalability and reliability, competitive differentiation, and
committed product support, all of which are offered under traditional licensing models.
CSPs that are using a licensing model, and are looking for cost reductions and business agility, are
likely to consider open APIs, service-oriented architecture, outsourcing and software as a service.
Either that or they use open-source software in conjunction with such models. Some CSPs are
considering open-source TOMS to reduce high support and maintenance costs and to avoid vendor
lock-in.
The most likely CSPs to adopt open-source TOMS are small ones that do not compete with each
other, and over-the-top providers that are not burdened by legacy back-office infrastructure and do
not consider TOMS solutions to be competitive differentiators. Some large CSPs with strong
internal IT departments might prefer open-source TOMS solutions to avoid exposure to external
suppliers. Finally, system integrators occasionally use open-source TOMS solutions in some
engagements as an alternative to commercial off-the-shelf solutions. Billing and CRM are the most
common open-source solutions.
Open-source TOMS may never reach maturity. It is likely to remain an alternative to commercial
licensing and outsourcing models for a small number of CSPs.
User Advice:

Start using open-source software internally on the middleware and foundational levels.

Assess the vendor's track record, based on reference checks.

Focus on the following categories:

Scalability

Integration

Overall issues of process integrity

Cost savings

Ability to support process modeling

Security

Business Impact: Open-source software is used mainly in the foundational technology of TOMS
solutions. Reducing the "integration tax" remains high on CSP priority lists. But the dearth of
carrier-grade open-source TOMS solutions has compelled most CSPs to stick with conventional
license and outsourced TOMS solutions. There is little incentive for large CSPs to participate in
open-source initiatives because they often consider TOMS a competitive differentiator. Traditional
software vendors benefit from the prevalence of traditional licensing models, which provides them
with an annuity stream for customization and support. Some system integrators are working with
open-source TOMS suppliers to provide more customized solutions for CSPs than are available from mainstream software vendors. These factors are likely to delay growth in open-source TOMS
for at least five to 10 years.
Benefit Rating: Moderate
Market Penetration: Less than 1% of target audience
Maturity: Emerging
Sample Vendors: Agileco; Freeside; jBilling; Open BRM; OpenNMS Group; OpenRate; ParaBill;
ProactiveRA; Zenoss

5G
Analysis By: Sylvain Fabre
Definition: 5G is a term being used to describe the next stage of mobile network infrastructure
technology, beyond 4G (Long Term Evolution [LTE] and LTE Advanced [LTE-A]). However, 5G
standards have not yet been defined. Additionally, some of the functionalities that have been
defined beyond 4G are currently being appended to the existing 4G set of standards. 5G
throughput may be faster than 4G's theoretical maximum of 1 Gbps, but the difference may not be
very large due to 4G having approached the limitations of the laws of physics.
Position and Adoption Speed Justification: Currently, because no standards actually exist for 5G,
various lab demonstrations are able to lay claim to some "5G-related functionality." LTE-A is being
worked into the 4G standards, as all standards in the Third Generation Partnership Project (3GPP)
R11 and R12 are still related to LTE-A. Additional working groups include:

Korea: 5G Forum

China: IMT 2020 (5G) Promotion Group (under MIIT, NDRC and MOST)

Japan: 2020 and Beyond AdHoc Group (under ARIB)

Europe: METIS, 5GIC, ETSI

While it is still unclear what specific features would be included or should even be prioritized for 5G, the technological basis for it will likely be laid down in the next three years or so, given the number of people who claim to be working on it and the amount of interest in 5G. Based on 3GPP history between successive generations of technology, deployment of 5G into networks will take seven to 10 years after the start of LTE-A, putting it around 2020 to 2023. In fact, NTT Docomo, the three Korean CSPs (SK Telecom, KT, LG U+) and China have declared their intention to launch 5G commercially in 2020. With cellular technologies, the definition in standards emerges a long time before mainstream adoption.
It may be that 5G is not defined as a single global standard. There is a risk that we'll see competing approaches, just like the cellular situation in China, where the specific TD-SCDMA flavor of 3G was created as well as the TDD variant of LTE; however, that may not necessarily matter by then, as software-defined radio (SDR) maturity could mean that supporting multiple standards will just be a programming issue.
Despite 3GPP standardization (R14) having not yet started and frequency recommendations by ITU
having not occurred for 5G, the race for the glory of being the first to launch 5G in Asia and/or
worldwide would be a key motivation for some communications service providers (CSPs).
Furthermore, the Tokyo Olympic Games in 2020 should be a driver, as its timing would be perfect
for a launch in Japan.
User Advice:

Focus mobile infrastructure planning on LTE, LTE-A, small cells and heterogeneous networks (HetNet); 5G is much too embryonic to be a concern for CSPs' planning, at least until the end of the decade. Commercial network equipment could be available by 2020, with commercial CSP rollouts expected around 2020 to 2023.

Be mindful of the risk of increased hype around 5G, as well as of what may not actually constitute 5G but is being marketed as such, similar to the way 4G has been misconstrued in marketing over the last few years. Ask vendors to indicate which standard they are building to. Until a 5G standard actually gets defined, vendors' efforts toward 5G are valuable, but can at best be described as early lab prototypes.

Business Impact: Uses can be found and developed for increased bandwidth, but 5G's
incremental value on top of LTE and LTE-A, as well as a mature small cell layer and pervasive Wi-Fi,
may be limited with respect to the deployment costs involved (as is the case with every new
wireless network generation).
Rather, development of 5G standards may focus on the user's perception of unlimited capacity.
Some potential areas for 5G networks that have been considered in research include the following:

Pervasive networks

Cognitive radio

SDR

Internet Protocol version 6 (IPv6)

Wearable devices

5G could be a framework whereby many existing legacy technologies, such as 3G, 4G, Wi-Fi,
HetNet and the small cells layer, are able to better coexist and interwork, using both licensed and
unlicensed spectrum.
Benefit Rating: High
Market Penetration: Less than 1% of target audience
Maturity: Embryonic


Sample Vendors: Alcatel-Lucent; BMW; Ericsson; Huawei; Nokia Solutions and Networks; NTT
Docomo; Samsung; Telecom Italia

DevOps
Analysis By: Ronni J. Colville; Jim Duggan
Definition: DevOps represents a change in IT culture, focusing on rapid IT service delivery through the adoption of agile, lean practices in the context of a system-oriented approach. DevOps emphasizes people (and culture), and seeks to improve collaboration between operations and development teams. DevOps implementations utilize technology, especially automation tools that can leverage an increasingly programmable and dynamic infrastructure, from a life cycle perspective.
Position and Adoption Speed Justification: DevOps doesn't have a concrete set of mandates or
standards, or a known framework (e.g., ITIL or Capability Maturity Model Integrated [CMMI]),
making it subject to a more liberal interpretation. For many it is elusive enough to make it difficult to
know where to begin and how to measure success. This can accelerate adoption or potentially
inhibit it. DevOps is primarily associated with continuous integration and delivery of IT services as a
means of providing linkages across the application life cycle, from development to production.
DevOps concepts are becoming more widespread across cloud projects and in more traditional
enterprise environments. The creation of DevOps teams brings development and operations staff
together to more consistently manage an end-to-end view of an application or IT service. For some
IT organizations, streamlining release deployments from development through production is the first
area of attention; this is where most acute service delivery pain exists.
DevOps practices include the creation of a common process for the developer and operations
teams; formation of teams to manage the end-to-end provisioning and practices for promotion and
release; a focus on high fidelity between environments; standard and automated practices for build
or integration; higher levels of test automation and test coverage; automation of manual process
steps and informal scripts; and more comprehensive simulation of production conditions throughout
the application life cycle in the release process.
Both Dev and Ops look to tools to replace custom scripting with consistent application or service
models, improving deployment success through more predictable configurations. The adoption of
these tools is not associated with development or production support staff, but rather with groups
that straddle development and production, and is typically instantiated to address specific Web
applications with a need for increased release velocity. To facilitate and improve testing and
continuous integration, tools that offer monitoring specific to testers and operations staff are also
beginning to emerge. Another challenge of DevOps adoption is the requirement for pluggability.
Toolchains are critical to DevOps to enable the integration of function-specific automation from one
part of the life cycle to another.
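The sketch below illustrates the idea of replacing ad hoc deployment scripts with a consistent, declarative service model that a toolchain can promote unchanged from one environment to the next. The model fields, the stubbed deployment call and the health-check gate are assumptions made for illustration only, not a prescription for any particular DevOps tool.

import urllib.request

# A declarative service model replaces custom deployment scripts: the same model
# is promoted unchanged across environments, keeping high fidelity between
# test and production configurations.
service_model = {
    "name": "billing-api",
    "version": "1.4.2",
    "image": "registry.example.com/billing-api:1.4.2",   # hypothetical registry
    "replicas": 3,
    "health_check": "/healthz",
}

def promote(model, environment, deploy_fn):
    # Deploy the model to an environment and gate promotion on an automated health check.
    endpoint = deploy_fn(model, environment)               # e.g., a call into a CD tool, stubbed here
    url = f"http://{endpoint}{model['health_check']}"
    with urllib.request.urlopen(url, timeout=5) as resp:   # simple automated release gate
        if resp.status != 200:
            raise RuntimeError(f"{model['name']} unhealthy in {environment}")
    return f"{model['name']} {model['version']} promoted to {environment}"

# Usage (with a stub standing in for a real continuous delivery tool):
# promote(service_model, "staging", deploy_fn=my_cd_tool_deploy)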
DevOps implementation is not a formal process; therefore, adoption is somewhat haphazard. Many aspire to reach the promised fluidity and agility, but few do. IT organizations leveraging pace-layering techniques can stratify and categorize applications and find applications that could be good targets for adoption. We expect this bifurcation (development focus and operations focus) to
continue for the next two years, but as more applications or IT services become agile-based or
customer-focused, the adoption of DevOps will quickly follow. DevOps does not preclude the use of
other frameworks or methodologies, such as ITIL. Incorporating some of these best-practice
approaches can enhance overall service delivery.
User Advice: DevOps hype is beginning to peak among tool vendors, with the term applied
aggressively and claims outrunning demonstrated capabilities. Many vendors are adapting their
existing portfolios and branding them DevOps to gain attention. Some vendors are acquiring smaller
point solutions specifically developed for DevOps to boost their portfolios. We expect this to
continue. IT organizations must establish key criteria that will differentiate DevOps traits (strong
toolchain integration, workflow, continuity, context, specificity, automation) from traditional
management tools.
Successful adoption or incorporation of this approach will not be achieved by a tool purchase, but
is contingent on a sometimes difficult organizational philosophy shift. Because DevOps is not
prescriptive, it will likely result in a variety of manifestations, making it more difficult to know
whether one is actually "doing" DevOps. However, the lack of a formal process framework should
not prevent IT organizations from developing their own repeatable processes for agility and control.
Because DevOps is emerging in definition and practice, IT organizations should approach it as a set
of guiding principles, not as process dogma. Select a project involving development and operations
teams to test the fit of a DevOps-based approach in your enterprise. Often, this is aligned with one
application environment. If adopted, consider expanding DevOps to incorporate technical
architecture. At a minimum, examine activities along the existing developer-to-operations
continuum, and look for opportunities where the adoption of more-agile communication processes
and patterns can improve production deployments.
Business Impact: DevOps is focused on improving business outcomes via the adoption of continuous improvement and incremental release principles adopted from agile methodologies. While agility often equates to speed, there is a somewhat paradoxical impact: smaller, more frequent updates to production can work to improve overall stability and control, thus reducing risk.
Benefit Rating: Transformational
Market Penetration: 5% to 20% of target audience
Maturity: Adolescent
Sample Vendors: Boundary; CFEngine; Chef; Circonus; Puppet Labs; SaltStack
Recommended Reading:
"Deconstructing DevOps"
"DevOps Toolchains Work to Deliver Integratable IT Process Management"
"Leveraging DevOps and Other Process Frameworks Requires Significant Investment in People and
Process"


"DevOps and Monitoring: New Tools for New Environments"


"Catalysts Signal the Growth of DevOps"
"Application Release Automation Is a Key to DevOps"

At the Peak
In-Memory Computing
Analysis By: Massimo Pezzini
Definition: Gartner defines in-memory computing (IMC) as an architecture style that assumes all the
data required by applications for processing is located in the main memory of their computing
environments. In IMC-style applications, hard-disk drives (HDDs; or substitutes such as solid-state
drives [SSDs]) are used to persist in-memory data for recovery purposes, to manage overflow
situations, to archive historical data and to transport data to other locations, but not as the primary
location for the application data.
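As a minimal, vendor-neutral sketch of this architecture style (not an example of any specific IMDBMS or in-memory data grid product), the code below keeps the primary copy of the data in main memory and uses the disk only as an append-only log for recovery, mirroring the role the definition assigns to HDDs and SSDs.

import json, os

class InMemoryStore:
    # Minimal IMC-style store: reads and writes hit main memory; the disk is used
    # only to persist changes for recovery, never as the primary data location.
    def __init__(self, log_path="store.log"):
        self.data = {}                    # primary copy lives in RAM
        self.log_path = log_path
        if os.path.exists(log_path):      # recovery: replay the persisted log
            with open(log_path) as f:
                for line in f:
                    key, value = json.loads(line)
                    self.data[key] = value

    def put(self, key, value):
        self.data[key] = value            # in-memory write, served at memory speed
        with open(self.log_path, "a") as f:   # persist to an append-only recovery log
            f.write(json.dumps([key, value]) + "\n")

    def get(self, key):
        return self.data.get(key)         # no disk I/O on the read path

store = InMemoryStore()
store.put("subscriber:42", {"plan": "LTE-unlimited", "usage_gb": 17.3})
print(store.get("subscriber:42"))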
Position and Adoption Speed Justification: Technology advancements have dramatically reduced
the cost of main memory (currently DRAM), to the point of making it technically feasible and
economically affordable to store in-memory multiple terabytes of data.
IMC-style applications deliver significant improvements in performance, scalability and analytic sophistication over traditional architectures. They are enabled by a range of software technologies, including in-memory DBMSs (IMDBMSs), in-memory data grids (IMDGs), event-processing platforms, high-performance messaging infrastructures and in-memory analytics tools. These technologies have reached a notable degree of maturity and adoption; some (for example, IMDGs) have been in the market for over a decade and in some cases (for example, in-memory analytics tools) boast installed bases in the tens of thousands of clients. However, the maturity of IMC-style packaged applications varies considerably by domain.
IMC opens up a number of opportunities, which would be simply unthinkable by using traditional
architectures, including, but not limited to:

Web-scale applications that support global operations (for example, e-commerce, online
entertainment and travel reservation systems) and new digital business models (for example,
API economy and cloud services).

Improved situation awareness in business operations by injecting real-time analytics, stream


processing and other operational business intelligence capabilities in transactional business
processes.

Faster delivery of business intelligence reports, interactive and unconstrained data navigation,
and self-service analytics enablement.

Support for "fast and big" data analytics scenarios.

Therefore, IMC will have a long-term, disruptive impact by radically changing users' expectations,
application design principles, products' architectures and vendors' strategies.
Initially pioneered by leading-edge organizations in financial services, telecom, defense and the
Web industry, IMC is rapidly moving onto the radar screen of mainstream organizations that are
more sensitive to new technologies' perceived business value than to technology shrewdness.
IMC adoption across vertical sectors, geographies and business sizes will notably grow, favored by
factors such as:

The insatiable demand for greater speed and scale driven by digital business requirements.

Eagerness for deeper and more timely analytics.

Dramatic reduction in main memory hardware costs.

Continuous maturation of the enabling technologies and their consolidation into broad and
better-integrated suites of IMC capabilities.

Endorsement of IMC architectures by packaged application vendors, SaaS providers and application infrastructure vendors (such as portal products, content management platforms, BPM tools, integration platforms and application platforms).

Emergence and maturation of open source IMC-enabling technologies.

However, several factors will continue to slow adoption:

Technology and vendor landscape fragmentation.

Lack of commonly agreed upon industry standards.

Scarcity of skills, and still not fully formalized industry best practices.

New security, high-availability, disaster recovery and IT operation challenges.

The efforts and costs associated with re-engineering established (custom or packaged)
applications for IMC.

The long time it will take for application providers to come out with native IMC versions of
their products, and for those versions to mature.

Consequently, overall IMC adoption and maturity still lags the hype and vendor marketing. For this
reason, we see it approaching the Peak of Inflated Expectations.
User Advice: Application architects and other IT leaders in charge of defining and implementing
application architectures to support strategic initiatives (such as Web-scale or digital business)
should identify on the basis of their organizations' desired business outcomes, risk profile,
willingness to invest in IT innovation, technology environment and available skills which of the
following approaches is the best path for IMC adoption in their organizations:

Developing new custom (or purchasing packaged) applications conceived from inception on the
basis of IMC design principles. This "native IMC" style may lead to transformational business
benefits, but also exposes the organization to higher risks of failure.

Replatforming traditional applications on top of an in-memory data store (IMDBMSs or IMDGs). This "retrofitted for IMC" style is the least invasive, but usually leads to only incremental benefits.

Re-engineering established applications by adopting IMC design principles and technologies only in part, by extending or partially reworking the application logic, but only in sections where IMC can provide the improvements mandated by the business requirements.

Many providers will retrofit and/or rearchitect for IMC-established products and cloud services.
Therefore, IT leaders should monitor vendor road maps to identify how this might impact their
investment plans.
For many mainstream organizations, entering into IMC by adopting a native IMC style may prove
too complex and risky. Therefore, unless business imperatives set different priorities, it may be
advisable to familiarize themselves incrementally with IMC design principles, technologies, and the new
governance and management challenges by successfully deploying a few IMC applications based
on less-challenging IMC styles before embarking on more ambitious, native-style projects.
Business Impact: IMC-style applications may drive transformational business benefits by enabling
IT leaders to:

Deliver orders-of-magnitude-faster performance for analytical and transaction processing applications.

Support Web-scale/global-scale business models (for example, mobile banking, e-commerce, online gaming, travel reservation and API-enabled businesses) supporting hundreds of thousands or millions of globally distributed, possibly mobile-enabled users (clients, patients, citizens, business partners) interacting in real time.

Provide deeper and greater real-time business insights and situation awareness.

Organizations leveraging IMC are better positioned to build defensible business differentiation than
those sticking with traditional architectures. Organizations that fail to endorse IMC risk falling
behind in the race for leadership in the digital era.
Benefit Rating: Transformational
Market Penetration: 5% to 20% of target audience
Maturity: Adolescent
Sample Vendors: GigaSpaces Technologies; IBM; Magic Software Enterprises; Microsoft; Oracle;
Pivotal; Qlik; Relex; SanDisk; SAP; Software AG; Tibco Software; Workday; YarcData
Recommended Reading:

"Taxonomy, Definitions and Vendor Landscape for In-Memory Computing Technologies"


"The Spectrum of IMC Styles Meets the Spectrum of Business Needs"
"Hybrid Transaction/Analytical Processing Will Foster Opportunities for Dramatic Business
Innovation"

Subscription Billing
Analysis By: Norbert J. Scholz
Definition: Subscription billing is an extension of service billing. It is also known as "recurring
billing," "cloud billing," "software-as-a-service (SaaS) billing," "activity-based billing," "dynamic
revenue management" and "revenue life cycle management." It is more complex than accountingor payment-processing-based service billing in that it allows for usage-based billing together with
recurring billing for subscriptions, rather than charging for one-time transactions or static
subscriptions.
Position and Adoption Speed Justification: Subscription billing tools differ from IT chargeback
and IT financial management tools by using resource usage data (similar to call detail records
[CDRs] in the communications industry) to calculate the costs for chargeback and aggregate them
for a service. Alternatively, they may offer service-pricing options (such as per employee or per
transaction) independent of resource usage. When pricing is based on usage, these tools can
gather resource-based data across various infrastructure components, including servers, networks,
storage, databases and applications. Service-billing tools perform allocation based on the amount
of resources (including virtualized and cloud-based) allocated and used by the service, for
accounting and chargeback purposes.
Service-billing costs are based on service definitions and include infrastructure and other resource
use costs (such as people-related costs). As a result, they usually integrate with IT financial
management tools and IT chargeback tools. These tools will be developed to work with service
governors to set a billing policy that uses cost as a parameter, and to ensure that resource
allocation is managed based on cost and service levels. Due to their importance to businesses,
these tools have been deployed in service provider, cloud environments and by IT organizations
that use or deploy applications, such as e-commerce applications.
For communications service providers (CSPs), subscription billing for voice has always been part of
their billing system. Subscription billing for content is becoming increasingly important. Many
existing billing systems cannot adapt to the requirements of recurring content billing, or dynamically combine voice and data subscriptions.
Subscription billing is usually provided on a SaaS basis. Solutions increasingly resemble
subscription management for e-commerce solutions. Subscription billing can contain the following
elements: real-time rating, mediation, allowance management, product catalogs, analytics and
dashboards, order management and provisioning, customer self-service, invoicing, payments,
promotion and campaign management, product management, settlement, collections and others.
Subscription billing enables activities such as multitier pricing, multiple revenue streams per

customer, service and product bundling, usage caps, entitlements, personalization, cross-product
discounts, and promotions.
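As a simplified sketch of how such a solution combines recurring and usage-based charges (the plan structure, rates and usage records below are invented for illustration and do not reflect any particular billing product), a monthly bill can be rated from usage records, in the style of CDRs, plus a fixed subscription fee:

from dataclasses import dataclass

@dataclass
class Plan:
    recurring_fee: float   # fixed monthly subscription charge
    included_units: int    # usage allowance covered by the recurring fee
    overage_rate: float    # per-unit price for usage beyond the allowance

def rate_monthly_bill(usage_records, plan):
    """Combine a recurring charge with usage-based charges derived from usage records."""
    total_units = sum(record["units"] for record in usage_records)
    overage_units = max(0, total_units - plan.included_units)
    overage_charge = round(overage_units * plan.overage_rate, 2)
    return {
        "recurring": plan.recurring_fee,
        "usage_units": total_units,
        "overage_charge": overage_charge,
        "total": round(plan.recurring_fee + overage_charge, 2),
    }

# Example: 1,200 units consumed against a 1,000-unit allowance
records = [{"subscriber": "A", "units": 700}, {"subscriber": "A", "units": 500}]
print(rate_monthly_bill(records, Plan(recurring_fee=29.0, included_units=1000, overage_rate=0.02)))
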
User Advice: Consider two scenarios for subscription billing:
Bill for your own and third-party services.

Evaluate the scalability of subscription-billing solutions and their ability to combine usage-based billing with recurring charges. Billing systems that support only recurring charges are
usually inappropriate for CSPs' requirements.

Ensure that the solution integrates well with other front- and back-office solutions.

Ascertain that nontechnical staff can easily use the solution.

Have a contingency plan in case the vendor no longer exists in its current form. Many vendors
are small and funded by venture capital.

Make subscription billing available to your small and midsize enterprise clients.

Offer subscription billing just like any other IT or value-added service in a cloud-based
environment, for a monthly charge.

Business Impact: In general, subscription billing tools are critical to running IT as a business. They
provide the means to determine the financial effect of sharing IT and other resources in the context
of services. They also feed billing data back to IT financial management tools and chargeback tools
to help businesses understand the costs of IT and to budget appropriately. These tools also provide
better cost transparency and governance in a public cloud environment.
CSPs already have specific billing systems in place that can handle many of the requirements met
by subscription-billing solutions. In general, subscription-billing solutions are desirable, but not
crucial unless CSPs' existing systems cannot adapt to recurring billing requirements. Subscription billers might challenge established CSP billers in the medium to long
term because they tend to offer more coherent and flexible solutions.
Benefit Rating: Moderate
Market Penetration: 1% to 5% of target audience
Maturity: Emerging
Sample Vendors: Accumulus; Aria Systems; Cerillion; Comarch; MetraTech; Monexa; Omniware;
Recurly; Redknee; SAP; Transverse; Vindicia; Zuora

Business Capability Modeling


Analysis By: Neil Osmond

Definition: Business capability modeling is a technique for representing the ways in which
resources, competencies, information, processes and their environments can be combined to
deliver consistent value to customers. Business capability models are one way to represent the
future-state capabilities of a business, as well as to provide a platform for illustrating how current
capabilities and business assets (people, processes, information and technologies) may need to
change in response to strategic challenges and opportunities.
Position and Adoption Speed Justification: Communications service provider (CSP) leaders face
significant business challenges as the decline of their companies' core revenue accelerates due to
commoditization, substitution by over-the-top services and unfavorable regulatory environments.
Although data revenue is growing, margins are declining due to falling prices. To survive, CSPs will
need to grow through business diversification and consolidation. They will also need to grow their
data business profitably, while improving the customer experience and customer loyalty.
Many CSPs are looking to use a new set of digital services to drive growth. To do this successfully,
they will need to establish relevant synergies across their organization. The premise is that digital
services models are not the same as traditional CSP service models for service design,
implementation, delivery and support, where services are enabled from deep within the network
layer. This has significant implications for a CSP's IT organization, which will require different skills,
competencies and capabilities.
The concept of expressing business capabilities and business capability modeling is not new. In
fact, business academics and practitioners have been talking about modeling business capabilities
for years. As a result, there are many definitions. However, this approach is mainly being adopted
by enterprise architects who are proactively trying to mature their enterprise architecture efforts to
engage business leaders.
Over the past year, Gartner has engaged with CSP clients during inquiry sessions, one-to-one
meetings and workshops on how to use business capability modeling as a platform to inform and
guide decision making between business and IT executives, especially as they look to transform IT
to assist digital business. We therefore position business capability modeling slightly before the
Peak of Inflated Expectations.
User Advice: Create a future-state business anchor model as part of the development of a business
outcome statement and an enterprise context (see "Define the Business Outcome Statement to
Guide Enterprise Architecture Efforts").
Consider using business capability modeling as a technique for representing the organization's
future state (see "Eight Business Capability Modeling Best Practices"), along with other possible
models (such as business process, operating and functional models).
Once a future-state model exists, CSPs can use business capability modeling as a platform for
creating both diagnostic and actionable deliverables (see "Use Business Capability Modeling to
Execute CSP Digital Services IT Strategies"). Deeper, detailed business capability models may be
used to illustrate specific decisions within information, business, solution and technology
architecture viewpoints (see "Toolkit: Using Business Capability Modeling to Execute CSP Digital
Services IT Strategies").

Business Impact: Business capability modeling is of "high" benefit because it enables CSPs'
business and IT strategic planners to engage in business planning and understand the impact of
associated decisions on business and IT. The value of this modeling is principally that it helps them
focus on and explore business direction and plans. It can also help them focus on and illustrate
investment decisions. They can then link these decisions to architectural changes.
An equally important benefit is that this modeling enables enterprise architecture practitioners to
have objective discussions about business capabilities without drilling down into technology,
people, processes and information details. Drilling down into details too early can derail discussions
about business direction and strategy, and organizational optimization.
Benefit Rating: High
Market Penetration: 5% to 20% of target audience
Maturity: Adolescent
Recommended Reading:
"Business Capability Modeling Brings Clarity and Insight to Strategy and Execution"
"Business Capability Modeling Helps Mercy Execute on Business Transformation"
"Eight Business Capability Modeling Best Practices Enhance Business and IT Collaboration"
"Starter Kit: Business Capability Modeling Workshop"
"Toolkit: Business Capability Modeling Starter Kits for Multiple Industries"
"Use Business Capability Modeling to Illustrate Strategic Business Priorities"
"To Assess the Impact of Change, Connect Process Models With Business Capability Models"

Managed Mobility Services


Analysis By: Eric Goodness
Definition: Managed mobility services (MMS) encompass the IT and process services, provided by
an external service provider, required to plan, procure, provision, activate, manage and support
mobile devices, network services and mobile applications. The devices under management include
smartphones, tablets and point-of-service equipment. The market is weighted toward the
management and support of corporate-owned devices; however, managed mobility services
include IT and process services and systems in support of bring your own device (BYOD).
Position and Adoption Speed Justification: The market for third-party MMS is fragmented and
lacks clear innovation and leadership. However, Gartner sees MMS as an important response to
offloading the tactical management and support issues that many businesses are unable, or
unwilling, to address for the rapidly expanding portfolio of smartphones, tablets and point-of-

service devices. In addition to new pure-play MMS companies emerging globally, traditional system
integrators, IT outsourcers and communications service providers are using MMS as a portfolio
offering to expand into a market adjacency related to their core capabilities.
Despite the rapid growth of MMS, the market will require some time to achieve mainstream
adoption. The principles and requirements for sourcing IT and process services to service providers
are well understood; however, in the mobile estate, users are still struggling with tactical issues
related to adoption, access and enablement. A decision to "outsource mobility" is an exercise in
cost avoidance, not cost reduction. Therefore, organizations will continue to distill their
requirements by geography and specific use cases. Consequently, we expect MMS to begin being
introduced as an add-on, value-added service ranging from help desk services to application
development and integration. As user requirements and service provider capabilities become
clearer and better understood, we expect broader and more rapid adoption during the five- to 10-year time frame.
User Advice: Companies should prepare bid documents that outline what is required from
managed mobility services in terms of outcomes and critical success factors by geographic
region because service providers and service catalogs vary. Additionally, user organizations should
view MMS as similar to other outsourcing projects; therefore, prepare SLAs and plan for internal
support as necessary. Any investment in MMS should be with a company that invests in internal
and external service delivery capability and relationships.
Business Impact: Managed mobility services span tactical and strategic business value related to
how mobile technologies and connectivity support internal and external communications,
collaboration, and commerce. Gartner views the following use cases as driving the MMS market:

Managed BYOD

Resource and cost visibility and control

Managed logistics and mobility management integration

Mobility outsourcing

Business extension and process transformation

Benefit Rating: Moderate


Market Penetration: 1% to 5% of target audience
Maturity: Adolescent
Sample Vendors: AT&T; DMI; Econocom; Enterprise Mobile; Intermec; Motorola Solutions; Orange;
Stratix; Unisys; Vodafone Global Enterprise; Vox Mobile
Recommended Reading:
"Magic Quadrant for Managed Mobility Services"

"Nexus of Forces: How Mobility Impacts and Challenges IT Services Providers to Create Innovative
Business Models"
"Competitive Landscape: Telecom Expense Management, Independent Providers"
"Technology Overview for Emerging Mobile Device Management Services in Asia/Pacific"

Open APIs in CSPs' Infrastructure


Analysis By: Mentor Cana
Definition: Deploying open and standardized APIs based on service-oriented architecture (SOA)
principles enables communications service providers (CSPs) to expose their capabilities for broader
use by a multitude of internal and external applications and services. The open API layer shields the
complexity of the network and IT infrastructure, and enables CSPs to participate in the broader
digital ecosystem.
Position and Adoption Speed Justification: The increase in the use of mobile devices and mobile
data, as well as the sophisticated over-the-top (OTT) services consumed by CSPs' subscribers, has
put pressure on CSPs to innovate. It has also put pressure on them to find new business models to
monetize their existing assets, and to develop and offer new services and solutions to their
customers.
CSPs have traditionally built and offered services and solutions with a siloed approach by direct and
deep integration of network and IT capabilities specific to each service or solution. The siloed
approach limits the reuse of capabilities across services and ties the use of specific network and IT
capabilities to their own services and solutions. Exceptions to this are billing, payment, messaging
and location-based service APIs that CSPs have exposed for some time now, often with proprietary
interfaces.
CSPs are starting to expose their capabilities via open APIs for internal integration and operational
efficiency, for end-user services to be consumed directly by their customers, or to expose them to
OTT players, enterprises, third-party developers and API aggregators, and other CSPs. The
exposure of network and IT infrastructure capabilities via APIs by CSPs is accelerating, mostly
focused on the integration and enablement of IT and data-centric digital services, as well as by the
adoption of network function virtualization (NFV) and software-defined networking (SDN) technologies to open up network
capabilities. SDN and NFV expose northbound interfaces for different classes of applications, from
lightweight OpenFlow "drivers," all the way to fully baked application-server platforms with
advanced controller logic, state synchronization, high availability, and OpenStack analytics. These
enable newer services such as bandwidth calendaring and chaining functions together on
hypervisors to create composite services (such as HD on demand) or virtual customer premises
equipment (CPE) like set-top boxes with network DVR capabilities.
The adoption and deployment of open APIs by CSPs has moved further toward the Peak of Inflated
Expectations. Open APIs have emerged as vehicles and enabling business channels, making it
possible for CSPs to participate in the broader ecosystem.

User Advice:

Leverage your broader assets proactively by developing and opening up your network and IT
capabilities via open APIs.

Go beyond APIs for messaging, billing and charging for content, device and user information,
as well as location services and some advertising capabilities.

Include traditional voice and OTT voice capabilities, presence, bandwidth, latency, throughput
on demand, access to content delivery networks, machine to machine, and cloud storage and
compute services on demand, as well as analytics.

Expose your capabilities as RESTful APIs, as REST has emerged as the predominant way to expose
capabilities to northbound systems (see the sketch after this list).

Proactively deploy NFV and SDN capabilities with the intent to expose network on demand or
network as a service capabilities via open APIs.

Focus on two aspects:

Internal consumption for integration, agility, flexibility, operational efficiency and development of
enterprise-focused solutions

External consumption to enable you to innovate and to interject yourselves into the OTT and
digital ecosystem revenue stream

Partner with API aggregators and brokers, other CSPs and vertical solution enablers to co-create value for your mutual customers.

Assess internally whether your existing organizational structure supports your API initiatives and
API development models. Open APIs will require the congruent organizational structure, skill
sets, funding and leadership.
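The sketch below shows what consuming such a capability might look like from a third-party developer's perspective. It is illustrative only: the gateway URL, resource path, payload fields and token are hypothetical placeholders rather than any specific CSP's or standards body's API, and it relies on the open source requests library.

import requests

API_BASE = "https://api.example-csp.com/v1"    # hypothetical open API gateway
ACCESS_TOKEN = "REPLACE_WITH_OAUTH_TOKEN"      # credentials issued by the CSP

def send_sms(recipient, message):
    """Invoke a hypothetical RESTful messaging capability exposed by a CSP."""
    response = requests.post(
        f"{API_BASE}/messaging/sms",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        json={"to": recipient, "body": message},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()   # for example, {"messageId": "...", "status": "queued"}
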

Business Impact: Exposing network and IT capabilities through open APIs is not the end goal for
CSPs, because there is limited direct revenue from APIs.
Externally focused open APIs are the means through which CSPs can continue to be part of a fast-changing digital market where CSPs' capabilities can play a role if they are easily and more readily
accessible by third-party developers, OTT service providers and other CSPs. The external exposure
needs to be matched by CSPs' internal flexibility and agility. The exposure of infrastructure
capabilities via open APIs for internal system integration and operational efficiency will enable CSPs
to use APIs to build solutions for enterprise customers, reduce development costs, enable faster
time to market, reduce integration complexity, increase reuse, and integrate network and IT
capabilities. In essence, CSPs are not only using APIs to expose capabilities, they are also using
open APIs as a development model.
The benefit for CSPs is high. The increased internal and external agility and innovation enabled via
open APIs will thrust CSPs forward. It enables CSPs to run at the speed of OTTs. It positions them
to become important participants in the emerging digital ecosystem and allows them to capture
new revenue streams.

Benefit Rating: High


Market Penetration: 1% to 5% of target audience
Maturity: Adolescent
Sample Vendors: Aepona; Alcatel-Lucent; Apigee; Huawei; Layer 7; Mashery; Neustar; Nexmo;
Oracle; Tail-f Systems; Tropo; Twilio
Recommended Reading:
"The API Development Model Offers CSPs New Opportunities and an Entry Into the Digital
Ecosystem"
"The Importance of Open Platform Strategies for CSPs Joining Emerging Digital Ecosystems"
"CSPs' Digital Business Requires a Bimodal IT Transformation Strategy"

IT/OT Integration
Analysis By: Kristian Steenstrup
Definition: Operational technology (OT) is hardware and software that detects or causes a change
in physical devices, processes and events in the enterprise. IT/OT integration is the end state
sought by organizations (most commonly, asset-intensive organizations, but any with embedded
and control systems on plant, infrastructure or specialized equipment) where, instead of a
separation of IT and OT as technology areas with different areas of authority and responsibility,
there is integrated process and information flow.
Position and Adoption Speed Justification: Integration will be achievable after alignment
exercises to arrive at common standards for platforms, security and software portfolio
management. Truly integrated approaches to IT and OT are difficult to achieve because of the
deeply rooted tradition in business where engineers and operations staff have been the "exclusive
owners and operators" of OT. As IT and OT converge (become more similar), there are changes in
how a digital business manages IT and OT. Traditional "ownership" becomes a shared
responsibility, even though accountability for operations may not shift. Clear opportunities and
demonstrable benefits exist when integrating the systems in a way that information can be shared
and process flows are continuous, with coherence and no arbitrary interruptions. A more agile and
responsive organization is the result. The data from OT systems will be the fuel for better decision
making in areas such as operations (adjusting and responding to production events) and reliability.
Few organizations have a mature, systemic approach to IT/OT integration. For most, there may be
touchpoints, but IT and OT are almost always managed by separate groups with different
approaches to technology and different vendors in use. Significant barriers exist, from entrenched
positions and attitudes on the IT and engineering sides of the company. In some industries, such as
utilities, we see a reverse integration in the sense that OT systems seek access and integration with
commercial systems (IT) to improve process performance. In short, the flow of data can be both

ways. Most often we see this initiated by our IT clients; however, it is common that the business will
seek integration when faced with challenges in cybersecurity, disaster recovery or software
administration.
User Advice: Understand the IT/OT convergence in your industry and company first. Get
consensus across groups and with senior management that something must be done, then look to
create a program of alignment. After that, you can progressively move toward a more
integrated approach to technology, regardless of whether it is IT or OT. This integration should
extend at least to the exchange of data and the maintenance of the platforms, with particular
attention to communications, security and enterprise architecture. In some organizations, this will
result in a fully integrated staff who no longer delineate between IT and OT.
Business Impact: The benefits for asset-intensive digital businesses will be an organization much
more capable of managing, securing and exploiting information and processes. For example, a
company might implement a basis for better reliability and maintenance strategies through more
direct access to condition and usage data for plants and equipment. Operational intelligence will
provide better production management, control and responses to events in the supply chain and
production processes.
Benefit Rating: High
Market Penetration: 1% to 5% of target audience
Maturity: Adolescent
Sample Vendors: Eurotech; RTI; ThingWorx
Recommended Reading:
"Realize the Benefits of IT and OT Alignment and Integration"
"A Methodology for IT/OT Management Strategic Planning"
"2014 Strategic Road Map for IT/OT Alignment"
"Agenda Overview for Operational Technology Management in a Digital Business, 2014"
"Establish Common Enterprise Semantics for IT/OT Integration"

Communications Service Providers as Cloud Service Brokerages


Analysis By: Gregor Petri; Tiffani Bova
Definition: In a cloud service brokerage (CSB) role, communications service providers (CSPs) act
as value-added intermediaries between third-party cloud service providers and end-user
customers, either directly or via their extended resale channels.

Position and Adoption Speed Justification: CSPs can offer any combination of the three roles of
CSB (aggregation, integration and customization), but currently tend to focus mostly on aggregation
of internal and external software as a service (SaaS) and some infrastructure as a service (IaaS).
Many CSPs have already begun to invest in their own build-out of cloud services, such as IaaS (see
"Mapping CSPs' Four Routes to Cloud Success" for examples), or have acquired hosting
companies (for example, Verizon integrating Terremark, or CenturyLink buying Savvis and later Tier
3) to extend their capabilities in this area. They are using these as the anchor products in their
portfolios, which they augment with brokered cloud services from external providers (such as
platform as a service and SaaS).
By taking a single-contact CSB role, CSPs can also help customers govern their use of multiple
cloud services more easily by offering management services such as usage reporting, a unified catalog, standardized user management, single billing and identity management. In turn, this allows
the telcos to increase their own service offerings through new partnerships. Many CSPs have a
large existing customer base into which they are able to cross-sell and upsell value-added cloud
services on top of existing connectivity and wireless contracts and in place of software licenses
for small or midsize business (SMB) solutions that several providers have resold in the past. Many of
these customers are SMBs consuming cloud services at a faster rate (but on a more modest scale)
than enterprise customers in certain markets and regions (see "Marketing Essentials: Strategic
Options for Bringing Cloud Computing Solutions to Midsize Businesses").
As SMBs generally lack broad internal IT resources, CSPs acting as CSBs can provide clear
benefits to their customers. These include making it easier, safer and more productive for
companies to navigate, integrate, consume, extend and maintain cloud services, particularly
when they span multiple, diverse cloud service providers. Initial added value is typically delivered
through CSB integrated billing and CSB single sign-on. Additionally, large enterprises are interested
in cloud service brokering, but typically want more autonomy over their selected solutions and
prefer integration into their existing portals. CSPs can also cater to these needs, but typically do so
through their established IT services and outsourcing divisions, following similar efforts of dedicated
IT service providers such as Infosys, CSC and Fujitsu.
The CSB approach enables CSPs to present a unified catalog of services (of both the CSP's own
and third-party-delivered services) through a self-service channel. As well as improving customer
experience, this also reduces a CSP's onboarding and support costs, and improves provisioning
times for new services by connecting users to the CSP's service delivery platform. Potentially, this
also provides CSPs with a means to create new vertical industry-specific packages and service
bundles.
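A deliberately simplified sketch of the aggregation role follows (the catalog entries, provider names and billing models are invented for illustration): the CSP's own services and brokered third-party services are merged into one catalog that a single marketplace, bill and sign-on can sit in front of.

own_services = [
    {"name": "IaaS compute", "provider": "csp-own", "billing": "usage"},
]
brokered_services = [
    {"name": "CRM SaaS", "provider": "third-party-a", "billing": "per-seat"},
    {"name": "Backup SaaS", "provider": "third-party-b", "billing": "per-GB"},
]

def unified_catalog(*catalogs):
    """Flatten several catalogs into one list, tagging each entry with its sales channel."""
    merged = []
    for catalog in catalogs:
        for entry in catalog:
            merged.append({**entry, "sold_via": "CSP marketplace"})
    return merged

for service in unified_catalog(own_services, brokered_services):
    print(service["name"], "|", service["provider"], "|", service["billing"])
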
Initially, the brokerage role has been mainly picked up by CSPs outside the U.S.:

Orange Business Services (Cloud Pro)

Telefonica (Aplicateca)

Telstra (T-Suite software)

SingTel (PowerON apps)

KT (ucloud SaaS)

Bell Canada (Business Apps Store)

Deutsche Telekom (Business Marketplace)

And more recently:

KPN (Open Cloud Store, an SMB-focused offering sold through channel partners, and Grip, a directly sold enterprise offering)

Turkcell (Super Bulut)

TeliaSonera (has announced cooperation with AppDirect)

However, a number of U.S.-based CSPs such as AT&T, Sprint, Verizon, cable provider Comcast
and ISP EarthLink have been making progress on delivering multiple cloud services in a more
seamless way via a CSB marketplace.
Over the last year, having a CSB offering has continued to be a target for CSPs, keeping it right at
the Peak of Inflated Expectations. As most of these offerings are still in their first iteration, they may
disappoint both consumers (from a functional/pragmatic perspective) and providers (from a
commercial revenue perspective). However, given the potential of the CSB model, Gartner
maintains the expectation that these services will reach the Plateau of Productivity within three to
10 years.
User Advice: Even though cloud computing is, in most cases, not yet a full replacement for in-house IT, the use of public cloud services will reduce the need for an in-depth knowledge of
selected in-house IT skills at end-user organizations. However, this may create a vacuum in
knowledge and skills that individual cloud service providers are not able or prepared to fill (see
"Cloud Adoption at Risk Without Big Channel Investments"). To address this, end-user customers
can turn to CSPs adopting a CSB role. Users need to be aware that the level of involvement of
CSPs in CSB can vary significantly; some are combining or integrating third-party cloud services
with their own IT and business process services, or taking a key role in the delivery of third-party
cloud services, while others are merely taking on a reselling role.
Customers will need to:

Confirm that CSPs offering CSB products deliver the expected functionality and ease of use

Add the required assurance around reliability, manageability and security (still a major inhibitor
to cloud services for many IT departments)

Aggressively address compliance and privacy concerns, for example, by guaranteeing the
location of data

In "Deutsche Telekom Brings the Cloud to European SMBs,"we described a specific CSP approach
that is not being adapted to address the enterprise segment.

Business Impact: As cloud computing proliferates, end-user organizations will build trusted
relationships with a small number of providers, offering them cloud services that cover large areas
of their overall solution portfolios. By being able to offer end-to-end SLAs, that is, contracts that fit local legal requirements and offer secure, reliable connectivity to (or even hosting of) remaining legacy systems, CSPs have a unique starting position in the brokerage market, particularly in a CSB
aggregation role.
However, significant partnering, integration, marketing and legal work around the solutions will have
to be done to simplify and automate the sourcing process for end-user organizations, and to
remove some of the risk of sourcing from the cloud.
Benefit Rating: High
Market Penetration: 5% to 20% of target audience
Maturity: Adolescent
Sample Vendors: Bell Canada; Deutsche Telekom; KPN; Orange Business Services; SingTel;
TeliaSonera; Telstra; Turkcell
Recommended Reading:
"Tech Go-to-Market: Four Strategic Options for CSPs to Explore Cloud Computing Opportunities"
"Mapping CSPs' Four Routes to Cloud Success"
"Predicts 2014: Cloud Services Brokerage"
"High-Tech Tuesday Webinar: Brokering Cloud Value, a CSP Guide to the Digital Industrial
Economy"

Heterogeneous Networks
Analysis By: Akshay K. Sharma; Deborah Kish
Definition: A heterogeneous network (HetNet) is a carrier-managed network where a mixture of
macrocells and small cells (3G, Long Term Evolution [LTE] and carrier-grade Wi-Fi) are employed to
provide coverage, with handoff capabilities shared between them. Unlike ad hoc Wi-Fi access
points, these are carrier-managed solutions providing quality of service (QoS) policy enforcement
and optimization of network characteristics (such as beam-forming, among others).
Position and Adoption Speed Justification: As mobile video becomes more prevalent on mobile
devices, techniques to offload traffic from the macrocellular network to local radio network solutions
such as small cells or Wi-Fi become more important. This saves on expensive macrocellular
resources and the backhaul network.
HetNet deployments are underway, especially for hybrid femtocell-Wi-Fi units, as well as picocell-Wi-Fi units. These typically serve a city block or campus and also offer unique functions for Wi-Fi

offload and session continuity across devices, providing a seamless roaming experience. The speed
of adoption will coincide with small cell adoption, which is already in the Trough of Disillusionment.
Market accelerators for HetNet solutions will be CSP-hosted services with support for session
continuity across macrocellular, small cells and Wi-Fi. This will result in applications such as:

Concurrent operator-provided SMS/MMS and video services with nonoperator services over
Wi-Fi.

Regulatory-compliant operator services such as emergency calls, with data sessions over Wi-Fi
that send pictures simultaneously to show what's occurring.

Operator-controlled QoS for services such as voice, which is concurrent with best-effort Wi-Fi
services such as video streaming.

Local Internet Protocol (IP) access, where local routing of cellular data and Wi-Fi traffic occurs
in the combined small cell/Wi-Fi access points (APs). This allows for savings in backhaul and
reduced latency, when content is shared between cellular and Wi-Fi devices.

HetNets can also be deployed without Wi-Fi or with Wi-Fi as separate networks. When combined
with a common policy framework and the Evolved Packet Core managing cellular and Wi-Fi
sessions, however, CSPs can offer holistic control of sessions with seamless roaming and QoS
enforcement.
User Advice: QoS and back-office functionality are required to differentiate HetNets from Wi-Fi hot
spot providers. Supporting seamless roaming with QoS and policy enforcement requires
deployment of service-aware solutions with the ability to provision, manage, authenticate and bill
accordingly.
Business Impact: Market consolidation is underway. Cisco acquired Ubiquisys and partnered with
NEC on macrocellular base stations; Ericsson acquired Belair, which promises further HetNet
offerings with Wi-Fi capabilities, in its newer Dot offering; and vendors such as Alcatel-Lucent and
Nokia are offering small cell solutions with Wi-Fi solutions via partnerships with firms such as
Ruckus Wireless.
Longer term, HetNets will have a significant business impact by enabling:

Lower-cost connectivity from lower-cost units, with inexpensive backhaul as well as lower cost, power and space requirements compared with macrocellular towers

Better-quality communications with greater coverage through more units deployed, including
better in-building coverage, due to closer proximity to users compared to macrocellular towers

With Hot Spot 2.0 for Wi-Fi-based HetNets, newer services can be offered. These could include
seamless session continuity and tiered services on-demand, such as HD experiences over Wi-Fi.
Benefit Rating: High
Market Penetration: 1% to 5% of target audience

Maturity: Emerging
Sample Vendors: Alcatel-Lucent; Cisco; Ericsson; NEC; Nokia; Ruckus Wireless
Recommended Reading:
"Competitive Landscape: CSP-Managed Small Cells Offer More Than Just Offload Capabilities"

Hybrid Mobile Development


Analysis By: Ian Finley
Definition: Hybrid mobile development allows developers to use Web skills and technologies to
build mobile apps that can be loaded into an app store, downloaded to a mobile device and used
like a native mobile app. Hybrid technologies allow code reuse across multiple, incompatible mobile
OSs and can yield apps that approach the look and feel of native applications.
Position and Adoption Speed Justification: Hybrid mobile development provides a productive
and cost-efficient way to build highly interactive mobile apps for the many mobile devices and OS
versions in use by customers, partners and employees. While many of the games, media and social
apps that are the most popular in consumer app stores were developed as native apps, the
popularity of hybrid development continues to increase, especially within enterprise application
development groups.
Given the potentially high cost and complexity of native mobile app development, many enterprises
would prefer to use hybrid mobile development whenever possible. Many enterprises have
standardized on Web development as their primary way to build new customer, partner and
employee applications used via PCs. With the growth of mobile device use, many have also built
mobile websites using the same Web assets, skills and technologies. Hybrid development leverages
these assets, skills and technologies for building compelling mobile apps that look and feel similar
to those built via native mobile development. Unlike native code, hybrid code is highly portable
across different mobile OSs. All the major mobile application development platform (MADP)
vendors, other than Apple and Google, have responded to enterprise needs and embraced hybrid
mobile development.
User Advice: When a rich mobile website will not meet business requirements, consider building a
mobile app with hybrid or native mobile development. Hybrid development is favored when cost
efficiency and broad device support are high priorities. Native development is favored in the most
demanding apps, where UI reactivity, graphics performance and processing speed are of
paramount importance. Keep in mind that for more demanding apps, hybrid development might be
a feasible option, but has no cost or productivity advantage over native development. Consider
using a MADP that supports mobile Web, hybrid and native development, but don't expect a single
MADP to be the best choice for all your mobile websites and apps.
Business Impact: Mobile apps built using hybrid mobile development can help enterprises improve
customer engagement and employee and partner productivity and collaboration, sometimes quite
dramatically.

Benefit Rating: High


Market Penetration: 5% to 20% of target audience
Maturity: Adolescent
Sample Vendors: Adobe PhoneGap; Apache Cordova; IBM; salesforce.com; SAP; Telerik;
Trigger.io
Recommended Reading:
"Bridge the HTML5, Native Apps Gap With Hybrid Approach"
"Increase Mobile Team Effectiveness by Understanding Web, Native and Hybrid Trade-Offs"
"Taxonomy, Definitions and Vendor Landscape for Mobile AD Technologies"
"Magic Quadrant for Mobile Application Development Platforms"

Social Network Analysis


Analysis By: Carol Rozwell
Definition: Social network analysis (SNA) tools are used to analyze patterns of relationships among
people. They are useful for examining the social structure and ties (for example, work patterns) of
individuals or organizations. SNA involves collecting data from multiple sources (such as surveys,
emails, blogs and other electronic artifacts), analyzing the data to identify relationships, producing
graphic visualizations and then mining it for new information, such as the quality or effectiveness of
a team's communication patterns.
Position and Adoption Speed Justification: SNA applications are used to analyze organizations
and collaborative environments, for example, R&D teams, organizational units and supplier
networks. Tools can also "flip the view" to show who is interacting with what content.
These applications will increasingly be used to mine data from social media sites. In addition, they
can be used to establish perspectives on user behavior in enterprises where linkage is explicit in
communications such as email or IM. SNA is also being used in electronic-discovery (e-discovery)
and other investigative applications such as fraud detection or crime prevention. The products that
are commercially available simplify the creation of network diagrams by using survey data, as well
as creating network visualizations based on electronic communication records. Tools that perform
analysis of the relationships, interactions and behavior of networks can be instrumental in
diagnosing a variety of workplace issues.
Historically, adoption of SNA has been hampered by the difficulty in collecting relevant and reliable
networking data. It has also been hampered by the perception that this type of analysis is highly
conceptual and the information collected difficult to translate into practical actions. Recently, more
vendors are incorporating SNA concepts into their products (for example, graph engines and
recommendation engines) that automate data collection and demonstrate the practical uses of the

analysis results. Variations of SNA tools are also making their way into collaboration platforms. They
provide practical features, such as suggesting people to follow, which help counter the perception
of SNA as overly academic and not helpful for doing real work.
User Advice: SNA can be valuable for identifying active networks in an organization that may not be
evident from looking at an organization chart. IT departments should study those teams, experiment
with them and pilot new approaches to improving communication and collaboration processes.
Surveys of social networks don't just turn up information sharing and interaction; they uncover
trust networks. SNA is often performed in conjunction with a consulting effort that applies other
qualitative and quantitative research methods. SNA helps identify issues such as groups that need
to collaborate, or communication gaps. Use SNA to determine which informal communities already
exist that can be augmented, and who are the network influencers. SNA can be combined with
graph databases that represent data as a network of nodes connected by edges to readily
determine connectivity among entities.
The simplest forms of SNA might be accomplished by adding a small number of questions to an
annual HR employee survey. When previously hidden patterns of information sharing and interaction
are made explicit, they can be studied to make improvements. Additionally, SNA can be used to
target key opinion leaders and to enable more accurate dissemination of product information. It can
also be used investigatively to determine patterns of interaction that may hold clues to proving guilt,
or innocence, in legal or regulatory actions. However, users should be mindful of privacy laws and
the concerns of employees, who may feel that they are under surveillance or even being threatened.
Business Impact: SNA can be used by organizations to:

Understand the flow of information and knowledge

Identify the key knowledge brokers

Highlight opportunities for increased knowledge flow to improve performance

Companies use organizational network maps to help them manage change, facilitate mergers and
reorganizations, enhance innovation, spot talent and plan for succession. SNA can be used in the
consumer space to identify target markets, create successful project teams and identify unvoiced
conclusions. It can also be used to detect implicit connections. Some e-discovery vendors use SNA
to find patterns in interactions, especially to identify additional legal custodians of data. SNA is
also gaining traction in sales organizations that see it as a means of identifying decision makers and
determining relationship strength.
As enterprises become more virtualized (with people from different organizations, in different
locations and time zones, operating under different objectives), bridging these virtual gaps becomes
a key element of collaboration initiatives. These initiatives will only succeed if there is an
understanding of the structure of informal relationships and work patterns, which SNA can reveal.
SNA is also gaining momentum in industries with significant quantities of customer information to
be mined, for example, telecommunications, banking and retail. In telecommunications, SNA uses
information from call detail records (such as number dialed, incoming caller number, call

count and type of call) to find out about the individual consumer and their calling circle. This basic
information can then be added to other information to provide specific datasets for different
activities, such as preventing churn or planning marketing expenditures.
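As a minimal sketch of this kind of analysis (the call records are invented and the open source networkx library is an illustrative choice, not a vendor implementation), a calling circle can be represented as a graph and ranked for influential subscribers:

import networkx as nx

# Invented call detail records: who called whom, and how often
call_records = [
    {"caller": "A", "callee": "B", "calls": 12},
    {"caller": "A", "callee": "C", "calls": 3},
    {"caller": "B", "callee": "C", "calls": 7},
    {"caller": "D", "callee": "A", "calls": 5},
]

graph = nx.Graph()
for record in call_records:
    graph.add_edge(record["caller"], record["callee"], weight=record["calls"])

# Degree centrality highlights likely influencers in the calling circle,
# for example subscribers whose churn could pull others along with them.
influencers = sorted(nx.degree_centrality(graph).items(),
                     key=lambda item: item[1], reverse=True)
print(influencers)
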
Often, use of quantitative research methods and tools such as SNA is augmented with qualitative
research methods (for example, ethnographies) to provide a more informed and balanced
perspective.
Benefit Rating: High
Market Penetration: 5% to 20% of target audience
Maturity: Adolescent
Sample Vendors: Activate Networks; Gephi; Idiro Technologies; Optimice; Orgnet; SAP
InfiniteInsight (KXEN); SAS; Syndio Social; Trampoline Systems

Capacity-Planning and Management Tools


Analysis By: Ian Head; Milind Govekar
Definition: Capacity-planning and management tools enable IT to plan, manage and optimize
the use of IT infrastructure and application capacity for business and IT service life cycles and
scenarios. They go beyond trending, providing "what-if" scenario modeling, based on business and
IT data. These tools also provide a real-time view of resource capacity and continuous optimization
of resources in a physical, virtual and cloud data center. The tools provide guidance on matching
workloads to resources to optimize the data center environment.
Position and Adoption Speed Justification: We have seen growing interest in and implementation
of capacity-planning tools. These products are increasingly used for standard data center
consolidation activities, as well as the related planning and management of virtual and cloud
infrastructures. These tools are also used to match workload requirements to the most appropriate
resources in a physical, virtual or cloud data center. They provide real-time visualization of capacity
in a data center to help optimize workloads and associated resources. Some of these tools are
embracing operational analytics functionality to provide performance and capacity information for
structured and unstructured data and environments. These tools require skilled people who may be
part of a performance management group.
Capacity-planning tools provide value by enabling enterprises to build performance scenarios
(models) that relate to business demand, often by asking what-if questions, and assessing the
impact of the scenarios on various infrastructure components. Capacity also has to be managed
(capacity management) in real time in a production environment that includes on-premises or cloud
IT resources in a physical or virtual environment. This includes assessing the impact on
performance of moving distressed workloads due to lack of resources to another environment with
more resources, in real time (defragmenting the production environment).
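A highly simplified what-if sketch follows (the utilization figures, growth rate and headroom threshold are invented assumptions, not output of any capacity-planning product): it projects component utilization under a demand-growth scenario and flags where headroom would be breached.

current_utilization = {"web_tier": 0.55, "db_tier": 0.70, "storage": 0.62}

def what_if(utilization, monthly_growth=0.05, months=12, threshold=0.80):
    """Project utilization under a growth scenario and flag components exceeding headroom."""
    projections = {}
    for component, util in utilization.items():
        projected = util * (1 + monthly_growth) ** months
        projections[component] = {
            "projected": round(projected, 2),
            "breaches_headroom": projected > threshold,
        }
    return projections

# Scenario: demand grows 5% per month for the next 12 months
print(what_if(current_utilization))
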
Capacity-planning tools help plan IT support for optimal performance of business processes, based
on planned variations of demand. These tools are designed to help IT organizations achieve
performance goals and plan budgets, while preventing the overprovisioning of infrastructure or the
purchasing of excessive off-premises capacity. Thus, the technology has evolved from purely a
planning perspective to provide real-time information dissemination, and control of workloads to
meet organizational performance objectives.
Increasingly, these technologies are being used to plan and manage capacity at the IT service and
business service levels, where the tools permit an increased focus on performance of the business
process and resulting business value. Although physical infrastructure and primarily component-focused capacity-planning tools have been available for a long time, products that support
increasingly dynamic environments are not yet fully mature.
Although adequately trained personnel will still be at a premium, some of these products have
evolved to the point at which many of their functions can be performed competently by individuals
not associated with performance engineering teams, and some of the capacity-planning tools
require little human intervention at all. These tools are an aid to performance engineering teams and should not be seen as an alternative to them.
User Advice: Capacity planning and management has become especially critical as shared infrastructures increase and enterprises devise strategies to implement hybrid IT environments, where the potential for resource contention may be greater. Users should invest
in capacity-planning and management tools to lower costs and to manage the risks associated with
performance degradation and capacity shortfalls. Although some tools are easier to use and
implement than others, many can still require a high level of skill, so adequate training must be
available to maximize the utility of these products. A cautionary example would be virtual server
optimization tools, which have enormous potential, but require skillful use to avoid unexpected
performance degradation elsewhere in the infrastructure.
Finally, determine the requirements of your infrastructure and application management environment: some organizations may require only support of virtual and cloud environments, while others will
need to include support for what may still be a substantial legacy installed base.
Business Impact: Organizations in which critical business services rely heavily on IT services
should use capacity-planning and management tools to ensure high performance and minimize the
costs associated with "just in case" capacity headroom excesses. When more accurate
infrastructure investment plans and forecasts are required, these tools are essential, but they are
usually implemented successfully by organizations with high IT service management maturity and a
dedicated performance management group.
Benefit Rating: High
Market Penetration: 5% to 20% of target audience
Maturity: Early mainstream
Sample Vendors: Aria Networks; BMC Software; CA Technologies; CiRBA; Dell Software;
Sumerian; TeamQuest; Veeam; VMTurbo; VMware

Recommended Reading:
"Govern the Infrastructure Capacity and Performance Planning Process With These 13 Key Tasks"
"How to Create and Manage an Infrastructure Capacity and Performance Plan"
"How to Build Best-Practice Infrastructure Capacity Plans"
"Toolkit: Business and IT Operations Data for the Performance Management and Capacity Planning
Process"
"Toolkit: Server Performance Monitoring and Capacity Planning Tool RFI"

Context-Enriched Services
Analysis By: Gene Phifer
Definition: Context-enriched services are those that combine demographic, psychographic and
environmental information with other information to proactively offer enriched, situation-aware,
targeted, personalized and relevant content, functions and experiences. The term denotes services
and APIs that use information about the user and the session to optionally and implicitly fine-tune
the software action to proactively push content to the user at the moment of need, or to suggest
products and services that are most attractive to the user at a specific time.
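To make the idea concrete, a minimal Python sketch of a context-enriched service is shown below; the scoring weights, field names and offers are hypothetical assumptions used purely for illustration, not a description of any vendor's implementation.

    # Hypothetical context-enriched offer selection: demographic, behavioral and
    # environmental signals adjust which offer is pushed to the user right now.
    def score_offer(offer, context):
        score = 0.0
        if offer["category"] in context["recent_searches"]:
            score += 2.0                                  # behavioral signal
        if offer["store_city"] == context["location_city"]:
            score += 1.5                                  # location signal
        if context["age_band"] in offer["target_age_bands"]:
            score += 1.0                                  # demographic signal
        return score

    context = {"recent_searches": {"running shoes"},
               "location_city": "Madrid", "age_band": "25-34"}
    offers = [
        {"id": 1, "category": "running shoes", "store_city": "Madrid",
         "target_age_bands": {"25-34", "35-44"}},
        {"id": 2, "category": "laptops", "store_city": "Paris",
         "target_age_bands": {"45-54"}},
    ]
    best = max(offers, key=lambda o: score_offer(o, context))
    print("Most relevant offer:", best["id"])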
Position and Adoption Speed Justification: Context enrichment refines the output of services and
improves their relevance. Context-enriched services have been delivered since the early days of
portals in the late 1990s. With the advent of mobile technologies, additional context attributes like
location have become highly leveraged. And now, with digital marketing, a wide array of contextual
attributes is used to target users with advertisements, offers and incentives. The most recent thrust of context-enriched services is consumer-facing websites, portals and mobile apps in mobile computing, social computing, identity controls, search and e-commerce, areas in which context is emerging as an element of competitive differentiation.
Enterprise-facing implementations, which use context information to improve productivity and
decision making by associates and business partners, have slowly begun to emerge, primarily in
offerings from small vendors (see "Context-Enhanced Performance: What, Why and How?"). While
personalization is not a new concept (portals have used a level of personalization for many years),
context-enriched services extend that model beyond portal frameworks into a multitude of Web and
mobile applications. Context-enriched services are typically delivered via proprietary approaches,
as an industry-standard context delivery architecture (CoDA) has not yet been created.
The focus on big data has created a favorable environment for the development of context-enriched
services. Many big data use cases are focused on customer experience, and organizations are
leveraging a broad range of information about an individual to hyperpersonalize the user experience,
creating greater customer intimacy and generating significant revenue lift. Examples include:


Walmart: Its Polaris search engine utilizes social media and semantic search of clickstream data to provide online customers with more-targeted offers (leading to a 10% reduction in shopping cart abandonment).

VinTank: This website analyzes over 1 million wine-related conversations each day to predict which customers will be interested in specific wines at specific price points, combines that with location information, and alerts wineries when a customer who is likely to be interested in their wines is nearby.

Orbitz: This site utilizes behavioral information from user history and search to develop predictive patterns that would increase hotel bookings by presenting users with hotels that more closely match their preferences. This project resulted in the addition of 50,000 hotel bookings per day, a 2.6% increase (see "Orbitz Worldwide Uses Hadoop to Unlock the Business Value of 'Big Data'").

The term "context-enriched services" is not popular in the industry vernacular, but its capabilities
are ubiquitous and pervasive. Context-enriched services have moved slightly forward this year,
indicating that there is room for more maturity, and more hype. The advent of new location-based
services, such as beacons, will add a new source of hype. We expect that the continued focus on
big data analytics will drive significant movement in 2016 and beyond.
User Advice: IT leaders in charge of information strategy and big data projects should leverage
contextual elements sourced both internally and externally for their customer-facing projects.
Context isn't limited to customers, however; employee-facing and partner-facing projects should
also be considered as targets for context-enriched services. In addition, investigate how you can
leverage contextual services from providers such as Google and Facebook to augment your
existing information.
Business Impact: Context-enriched services will be transformational for enterprises that are
looking to increase customer engagement and maximize revenue. In addition, context enrichment is
the next frontier for business applications, platforms and development tools. The ability to automate
the processing of context information will serve users by increasing the agility, relevance and
precision of IT services. New vendors that are likely to emerge will specialize in gathering and injecting contextual information into business applications. New kinds of business applications, especially those driven by consumer opportunities, will emerge, because the function of full context awareness may end up being revolutionary and disruptive to established practices.
Benefit Rating: Transformational
Market Penetration: 5% to 20% of target audience
Maturity: Adolescent
Sample Vendors: Adobe; Apple; Facebook; Google; IBM; Microsoft; Oracle; SAP; Sense Networks
Recommended Reading:
"Context-Aware Computing Is the Next Big Opportunity for Mobile Marketing"


"An Application Developer's Perspective on Context-Aware Computing"


"Drive Customer Intimacy Using Context-Aware Computing"
"The Future of Information Security Is Context-Aware and Adaptive"

CSP Network Intelligence


Analysis By: Kamlesh Bhatia
Definition: Network intelligence (NI) technologies allow communications service providers (CSPs) to
capture subscriber-, service- and application-level context from network resources. This information
is used to enable a better customer experience, improve operational efficiency of network assets
and create monetization opportunities for CSPs. The solutions comprise numerous data gathering,
processing and enabling functions that allow CSPs to easily integrate network information with
information from the IT and customer interaction layers.
Position and Adoption Speed Justification: The adoption of NI solutions is an extension of the
existing capability in CSP networks to tap contextual information about customers and services.
Traditionally, CSPs have relied on a custom-made capability in the form of tools for detecting traffic
flow patterns and metadata information in data streams. The functionality was largely embedded in
network equipment on the edge of the network or as part of a service with other asset management
software. This fragmentation in approach to NI meant that the collective output was largely
incoherent or not timely enough to create meaningful impact in terms of operations and
monetization opportunities.
As CSPs migrate toward digital networks and aim to offer and support an increasing variety of
connected digital services, there is a need to improve accuracy and speed of information gathering.
The eventual blurring of lines between traditional IT and networks means that CSP information
architectures should extend into the network realm with the use of analytics and real-time business
intelligence capabilities to enhance the richness, visibility and applicability of this information.
This ability to "connect the dots" between customer information in the network and real-time
service and application usage, as well as enhance and expose this information internally, is the main
goal of NI technologies. This should be achieved through the use of services-based integration
between data assets, thus resulting in reduction of cost and effort.
The use of NI technologies as a functional platform can form the middleware between the network
and the application layer, enhancing applications by making them more event-driven and context-aware. Examples include, but are not limited to, billing, policy management, customer life cycle management, and revenue and service assurance.
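A minimal Python sketch of this "information middleware" role is shown below; the record layout, subscriber profile and upsell rule are hypothetical assumptions, intended only to illustrate how a network usage event can be enriched with subscriber context before it reaches back-office applications.

    # Hypothetical NI enrichment: a raw network usage record is merged with
    # subscriber context and exposed to billing, policy and CRM applications.
    subscriber_profiles = {
        "MSISDN-001": {"plan": "unlimited_social", "segment": "high_value"},
    }

    def enrich(flow_record):
        profile = subscriber_profiles.get(flow_record["subscriber_id"], {})
        return {**flow_record, **profile}        # merged, context-rich event

    raw_flow = {"subscriber_id": "MSISDN-001", "application": "video_streaming",
                "cell_id": "eNB-42", "bytes": 180_000_000}

    event = enrich(raw_flow)
    if event.get("segment") == "high_value" and event["application"] == "video_streaming":
        print("Notify CRM: offer an HD video pass to", event["subscriber_id"])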
Adoption of NI technologies among CSPs has been slower than expected. This is largely a result of
the internal organizational challenges of bridging the network-IT divide. Early implementations of NI
technologies have largely focused on integration with network-based billing and policy management
applications, with embedded analytics and reporting capabilities for subscriber profiling and
demand forecasting. New use cases for blending NI with content optimization and delivery to
enhance customer experience are emerging. Investment in technologies like software-defined
networking (SDN) may create additional impetus for "NI as information middleware" as the lines
between the network and IT eventually blend.
User Advice:

Evaluate bandwidth consumption patterns and influence of enhanced subscriber and application awareness on new product development to make a business case for an NI solution.

Consider building a robust NI capability that integrates with applications such as policy management, charging, CRM and subscriber-data management to address aspects of the revenue and customer life cycles.

Work with vendors that offer standardization in their approach and a high level of modularity.

View NI as an extension of the larger information management architecture that includes a traditional business intelligence and analytics capability in the organization. Some vendors have now started to embrace this coherent network and IT concept in their latest offerings for customer experience management solutions.

Business Impact: Granular understanding of subscriber and network behavior is vital to monetize
data and new content-based services. Enhancing the functional capability of back-office
applications by adding subscriber and network awareness can help CSPs to improve customer
experience, create upsell opportunities and optimize resource consumption.
Benefit Rating: Moderate
Market Penetration: 1% to 5% of target audience
Maturity: Emerging
Sample Vendors: Alcatel-Lucent; Allot Communications; Amdocs; Cisco; ipoque; Niometrics;
Oracle; Qosmos; Sandvine
Recommended Reading:
"Market Insight: Calculating the Value of CSP Customer Data"
"Survey Analysis: CSPs' Top Priorities for Extracting Value From Customer Data, Today and in the
Future"
"Market Trends: Linking Network and IT Intelligence to Drive Context-Aware Customer Experience"

Sliding Into the Trough


4G Standard
Analysis By: Sylvain Fabre


Definition: The 4G Long Term Evolution Advanced (LTE-A) standard is for next-generation local- and wide-area mobile platforms. It supports high peak data rates; handover between wireless bearer technologies; Internet Protocol (IP) core and radio transport networks for voice, video and
data services; and support for call control and signaling. With peak data rates of 100 Mbps in WANs
and 1 Gbps in fixed or low-mobility situations, and all-IP core, radio access and transport networks,
4G will be mostly implemented as LTE-A.
Position and Adoption Speed Justification: As of May 2014, there were seven commercial LTE-A
deployments worldwide (according to the Global Mobile Suppliers Association). Network
architecture includes all IP, low-latency, flat architecture and the integration of femtocells and
picocells (as the "small-cell" layer) within the macrolayer, using the heterogeneous network (HetNet)
and orthogonal frequency division multiplexing, software-defined radio and multiple input/multiple
output. The radio access network will be made up of more cells of decreasing size, to enable
increased frequency reuse for maximum performance. Key technical aspects in LTE-A include
carrier aggregation and relay capabilities for mesh-like topologies, allowing better coverage and
capacity.
There are reasons why LTE-A introduction is still in its early stages. Deployments of High-Speed
Downlink Packet Access (HSDPA), High-Speed Uplink Packet Access (HSUPA) and LTE technology
will extend the life of 3G infrastructures for voice and, to some extent, data.
Additionally, network operators want to receive a worthwhile return on 3G and LTE investments
before moving to 4G. There is also the problem of how to provide adequate yet cost-effective
backhaul capacity; this was already difficult with the higher data rates supported by High-Speed
Packet Access (HSPA), and these difficulties remain with LTE and then 4G. At this point, it appears
that LTE-A is a clear leader for 4G, with 802.16m (WiMAX 2) a possible but very distant contender.
WiMAX was considered but ultimately dismissed; in the U.S., Sprint had begun advertising WiMAX
as 4G, but instead began commercial LTE availability in 2013. Worldwide, WiMAX is losing
momentum to Time Division-LTE (TD-LTE).
User Advice:

Prepare for business impact areas for 4G: high-speed, low-latency communications, multiple
"pervasive" networks and interoperable systems.

Recognize the cost of deploying and operating an entirely new 4G network. Consider different network business models, for example, increased use of network sharing and small cells. Leverage alternative spectrum; for example, Verizon has launched XLTE (LTE over Advanced Wireless Services spectrum).

Business Impact:

Early features of LTE-A such as carrier aggregation are being rolled out in some operators'
networks. However, continue monitoring the deployment and success of 3G enhancements
such as HSDPA, HSUPA, HSPA Evolution (HSPA+) and LTE, as these need to provide a

worthwhile ROI before network operators will commit themselves to a new generation of
technology.

Ensure interoperability with today's networks, because backward-compatibility might otherwise


become an issue (as interworking between wideband code division multiple access and LTE
networks is proving to be). A lot of confusion remains due to the carriers' inconsistency in their
use of the term "4G." 3G technologies have been labeled by communications service providers
interchangeably. For example, 3G has been referred to as 4G for "HSPA+" by T-Mobile and
AT&T, and also for "WiMAX" by Sprint; and then the term "LTE" was used by Verizon all in
the same U.S. market. AT&T then began deploying LTE-A carrier aggregation in 2014 in a small
number of markets.

Benefit Rating: High


Market Penetration: Less than 1% of target audience
Maturity: Emerging
Sample Vendors: Alcatel-Lucent; Ericsson; Fujitsu; Huawei; NEC; Nokia; Samsung Electronics; ZTE
Recommended Reading: "Magic Quadrant for LTE Network Infrastructure"

Big Data
Analysis By: Mark A. Beyer
Definition: Big data is high-volume, high-velocity and high-variety information assets that demand cost-effective, innovative forms of information processing for enhanced insight and decision making.
Position and Adoption Speed Justification: Big data has crossed the Peak of Inflated
Expectations. There is considerable debate about this, but when the available choices for a
technology or practice start to be refined, and when winners and losers start to be picked, the worst
of the hype is over.
It is likely that big data management and analysis approaches will be incorporated into a variety of
existing solutions, while simultaneously replacing some of the functionality in existing market
solutions (see "Big Data Drives Rapid Changes in Infrastructure and $232 Billion in IT Spending
Through 2016"). The market is settling into a more reasonable approach in which new technologies
and practices are additive to existing solutions and creating hybrid approaches when combined
with traditional solutions.
Big data's passage through the Trough of Disillusionment will be fast and brutal:

Tools and techniques are being adopted before expertise is available, and before they are
mature and optimized, which is creating confusion. This will result in the demise of some
solutions and complete revisions of some implementations over the next three years. This is the
very definition of the Trough of Disillusionment.


New entrants into this practice area will create new, short-lived surges in hype.

A series of standard use cases will continue to emerge. When expectations are set properly, it
becomes easier to measure the success of any practice, but also to identify failure.

Some big data technologies represent a great leap forward in processing management. This is
especially relevant to datasets that are narrow but contain many records, such as those associated
with operational technologies, sensors, medical devices and mobile devices. Big data approaches
to analyzing data from these technologies have the potential to enable big data solutions to
overtake existing technology solutions when the demand emerges to access, read, present or
analyze any data. However, inadequate attempts to address other big data assets, such as images,
video, sound and even three-dimensional object models, persist.
The larger context of big data is framed by the wide variety, and extreme size and number, of data
creation venues in the 21st century. Gartner clients have made it clear that big data technologies
must be able to process large volumes of data in streams, as well as in batches, and that they need
an extensible service framework to deploy data processes (or bring data to those processes) that
encompasses more than one variety of asset (for example, not just tabular, streamed or textual
data).
It is important to recognize that different aspects and varieties of big data have been around for more than a decade; it is only recent market hype about legitimate new techniques and solutions that has created this heightened demand.
Big data technologies can serve as unstructured data parsing tools that prepare data for data
integration efforts that combine big data assets with traditional assets (effectively the first-stage
transformation of unstructured data).
User Advice:

Focus on creating a collective skill base. Specifically, skills in business process modeling,
information architecture, statistical theory, data governance and semantic expression are
required to obtain full value from big data solutions. These skills can be assembled in a data
science lab or delivered via a highly qualified individual trained in most or all of these areas.

Begin using Hadoop connectors in traditional technology and experiment with combining
traditional and big data assets in analytics and business intelligence. Focus on this type of
infrastructure solution, rather than building separate environments that are joined at the level of
analyst user tools.

Review existing information assets that were previously beyond analytic or processing
capabilities ("dark data"), and determine if they have untapped value to the business. If they
have, make them the first, or an early, target of a pilot project as part of your big data strategy.

Plan on using scalable information management resources, whether public cloud, private cloud
or resource allocation (commissioning and decommissioning of infrastructure), or some other
strategy. Don't forget that this is not just a storage and access issue. Complex, multilevel,
highly correlated information processing will demand elasticity in compute resources, similar to
the elasticity required for storage/persistence.


Small and midsize businesses should address variety issues ahead of volume issues when
approaching big data, as variety issues demand more specialized skills and tools.

Business Impact: Use cases have begun to bring focus to big data technology and deployment
practices. Big data technology creates a new cost model that has challenged that of the data
warehouse appliance. It demands a multitiered approach to both analytic processing (many
context-related schemas-on-read, depending on the use case) and storage (the movement of "cold"
data out of the warehouse). This resulted in a slowdown in the data warehouse appliance market
while organizations adjusted to the use of newly recovered capacity (suspending further costs on the warehouse platform) and moved appropriate processing from a schema-on-write approach to a schema-on-read approach.
In essence, the technical term "schema on read" means that if business users disagree about how
an information source should be used, they can have multiple transformations appear right next to
each other. This means that implementers can do "late binding," which in turn means that users can
see the data in raw form, determine multiple candidates for reading that data, determine the top
contenders, and then decide when it is appropriate to compromise on the most common use of
data and to load it into the warehouse after the contenders "fight it out." This approach also
provides the opportunity to have a compromise representation of data stored in a repository, while
alternative representations of data can rise and fall in use based on relevance and variance in the
analytic models.
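A minimal Python sketch of schema on read is shown below, assuming simple JSON records; the field names and the two candidate "read" transformations are illustrative, the point being that neither interpretation is fixed when the raw data is loaded.

    # Illustrative schema on read: the same raw records are interpreted through
    # two competing transformations; binding happens at read time, not load time.
    import json

    raw_records = [
        '{"ts": "2014-05-01T10:00:00", "dev": "sensor-7", "val": "21.4"}',
        '{"ts": "2014-05-01T10:05:00", "dev": "sensor-7", "val": "21.9"}',
    ]

    def read_as_temperature(rec):                 # candidate schema A
        r = json.loads(rec)
        return {"device": r["dev"], "temp_c": float(r["val"])}

    def read_as_event_count(rec):                 # candidate schema B
        r = json.loads(rec)
        return {"device": r["dev"], "hour": r["ts"][:13], "events": 1}

    for rec in raw_records:
        print(read_as_temperature(rec), read_as_event_count(rec))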
Benefit Rating: Transformational
Market Penetration: 5% to 20% of target audience
Maturity: Adolescent
Sample Vendors: Cloudera; EMC; Hortonworks; IBM; MapR; Teradata
Recommended Reading:
"Big Data Drives Rapid Changes in Infrastructure and $232 Billion in IT Spending Through 2016"
"Big Data' Is Only the Beginning of Extreme Information Management"
"How to Choose the Right Apache Hadoop Distribution"
"The Importance of 'Big Data': A Definition"

Network Function Virtualization


Analysis By: Akshay K. Sharma; Deborah Kish
Definition: Network function virtualization (NFV) is the virtualization of network equipment functions that typically run on dedicated appliances, so that they run instead on industry-standard servers, switches and storage devices, via hypervisor technologies, with the aim of lowering costs, improving efficiency and increasing agility. Standards are being created by the European Telecommunications

Standards Institute, including major telecom equipment vendors and communications service
providers (CSPs), with involvement by the Open Networking Foundation.
Position and Adoption Speed Justification: Alcatel-Lucent, Cisco, Juniper Networks, Oracle, HP,
Ericsson, Metaswitch, Brocade, Sonus, Genband and many other equipment vendors have
announced initial network virtualization capabilities running on commercial off-the-shelf (COTS)
hardware.
Although porting of network systems to hypervisors running on dedicated servers has been done before (with softswitches and the mobile Evolved Packet Core functions of Long Term Evolution networks running on Advanced Telecommunications Computing Architecture [ATCA] servers), doing so on server farms presents significant technical challenges for NFV. These challenges relate to the virtualization of hundreds of networking functions in a single server farm, orchestrating this with the correct analytics, distributing high-performance network bandwidth to the virtualized functions, and maintaining carrier-grade resiliency, with policy and back-office aspects all intertwined.
A newer theme is CSP-managed virtualized customer premises equipment (CPE) that can be hosted
as a service within data centers. Enterprise-hosted appliances such as session border
controllers, firewalls, load balancers, Internet Protocol (IP) PBXs, interactive voice response, content
delivery networks and others will soon be offered as a service via NFV, and will be dynamically
orchestrated as needed. The operations support system (OSS)/business support system (BSS)
stack will need to be addressed to ensure provision, inventory management, metering and charging
for the consumption and utilization of these nontraditional resources, among the other capabilities.
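The sketch below illustrates, in Python, the kind of information such an OSS/BSS integration would have to carry, from a hypothetical request to instantiate a virtualized CPE function through to the usage record needed to meter and charge for it; the field names and values are assumptions, not any orchestrator's actual interface.

    # Hypothetical virtualized-CPE order and the matching charging input.
    vnf_order = {
        "customer_id": "ENT-1001",
        "vnf_type": "virtual_firewall",          # CPE function hosted in the data center
        "flavour": "2vCPU-4GB",
        "placement": "regional-dc-eu-1",
        "sla": {"availability": "99.99%"},
    }

    usage_record = {
        "customer_id": "ENT-1001",
        "vnf_instance": "vfw-7f3a",
        "metric": "vCPU-hours",                  # nontraditional resource to meter
        "quantity": 48,
        "rating_hint": "enterprise_security_tier1",
    }

    print("Orchestration request:", vnf_order)
    print("Charging input for BSS:", usage_record)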
User Advice:

Consider NFV in the context of a CSP's overall network strategies, and monitor announcements
from equipment vendors.

Plan for major changes in OSS and BSS aspects, including additional care for security when
orchestrating functions within the data center.

Consider offering data center solutions based on network virtualization services such as
security as a service and virtualized CPE offerings.

Business Impact: The business case for this emerging technology has yet to be proven for data
center CIOs and CSP CTOs. They require de facto standards, full interoperability and use cases that
are proven in the field. However, operational efficiencies, quicker time to market for newer
applications, and newer revenue share business models may result from this technology.
Alternative trends in cloud computing architectures are emerging, with the advent of vast "megaplexes" of massively parallel server farms in huge data centers. These megaplexes comprise application servers, typically running on general-purpose computers with 1-Gigabit to 10-Gigabit Ethernet connectivity, high storage capacity and high-performance processing power. An emerging trend
is to harness this power to enable network applications to run on hypervisors in sync with network
demand by tying NFV to software-defined networking (SDN). This is the goal of vendors such as
Cisco, Alcatel-Lucent, Juniper Networks, HP and Ericsson.


Although vendors such as VMware have virtualized storage or applications that can be deployed across data centers, network connectivity between data centers is typically "fixed." Network virtualization is a concept whereby network connectivity becomes virtual (that is, router-, port-, speed-, latency- and resiliency-agnostic), which is also a goal of SDN. NFV augments this concept by offloading network functions to COTS servers, creating potential for an on-demand model of network services that relies on real-time policy and charging.
Network functions could run on server farms that are able to be "elastic" in relation to usage. This
could have additional benefits:

Energy and space savings from converged network elements and an elastic network, rather
than speed-specific, nailed-in circuits connected to dedicated ports and routers

The ability to introduce new service networks without new overlays

Reduced total cost of ownership

Risk mitigation

Streamlined asset utilization

Support for new business models, with on-demand, real-time charging

Improved profitability

Benefit Rating: Transformational


Market Penetration: 1% to 5% of target audience
Maturity: Emerging
Sample Vendors: Alcatel-Lucent; Cisco; Ericsson; HP; Juniper Networks
Recommended Reading: "SDN and NFV Offer CSPs a New Way to Do Business"

Mobile Self-Organizing Networks


Analysis By: Sylvain Fabre
Definition: A self-organizing network (SON) is a framework that automates some planning functions
as well as maintenance functions for mobile networks. It resides partly in the operations support
system framework and partly in the radio access network (RAN) eNodeB. In practice, a SON is
introduced gradually as an adjunct to human operators for planning, alarms management and
maintenance, self-configuration, and self-optimization for the mobile network, to help
communications service providers (CSPs) manage the RAN more easily.
Position and Adoption Speed Justification: SONs are a key issue for many CSPs and have been
a key requirement for next-generation mobile networks. Extra SON requirements have been defined
in the Third Generation Partnership Project (3GPP) and next-generation mobile networks (NGMNs).
The rationale behind SON functionality is that second-generation, third-generation and Long Term

Evolution (LTE) fourth-generation networks will coexist for some time. For that reason, cost-optimized, automated operations, maintenance and planning are necessary, because three different radio networks (2G, 3G and 4G) will still need to be managed and optimized simultaneously.
A SON can help achieve stable LTE networks for operators (particularly in the early stages) and
allow automation of maintenance tasks in the longer term. It is likely that small cells will play a
greater role in broad-scale LTE. Primarily, the SON is a key feature of femtocells. The increasing use
of small cells will mean that a much larger quantity of sites will need to be managed, hence the
need for the automation that SONs enable. Interface specifications and use cases for SONs have
been defined and are part of 3GPP specification TS 32.500 in release 8. Leading operators are
implementing SONs.
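The Python sketch below illustrates the kind of closed-loop tuning a SON automates at scale, here a single load-balancing step between two cells; the thresholds, step size and parameter names are hypothetical and do not represent any vendor's algorithm.

    # Illustrative self-optimization step: nudge a handover offset so edge users
    # drift from an overloaded cell toward a lightly loaded neighbour.
    cells = {"cell_A": {"load": 0.92, "ho_offset_db": 0.0},
             "cell_B": {"load": 0.35, "ho_offset_db": 0.0}}

    LOAD_DELTA_TRIGGER = 0.30    # act when neighbours differ by more than 30 points
    STEP_DB = 1.0                # adjust offsets in 1 dB steps
    MAX_OFFSET_DB = 6.0          # never push the offset beyond a safe bound

    def balance(hot, cold):
        overload = cells[hot]["load"] - cells[cold]["load"]
        if overload > LOAD_DELTA_TRIGGER and cells[hot]["ho_offset_db"] < MAX_OFFSET_DB:
            cells[hot]["ho_offset_db"] += STEP_DB
            print(f"Raised {hot} handover offset to {cells[hot]['ho_offset_db']} dB")

    balance("cell_A", "cell_B")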
User Advice:

Impose requirements on vendors for valued features and evaluate the cost of these features
based on estimated savings in operations, administration and maintenance, as well as network
planning and head count.

Push vendors to integrate network elements into SON beyond the RAN. For example,
incorporating the core network should become a logical requirement for SON vendors.

Require multivendor, multitechnology (including Wi-Fi) support. Most vendors propose features that present elements as black boxes and give control of the SON to the vendor, whereas operators would prefer interworking between different vendors, administered by the operator. CSPs should make requests for quotations clear in terms of their interoperability requirements for SON and LTE when dealing with multivendor scenarios.

Request a SON road map to include 2G and 3G networks, over time adding Wi-Fi and small cell
layers in a heterogeneous network (HetNet).

Business Impact:

SONs can provide significant savings in operations and greatly automate the way future
wireless networks are set up, managed and planned.

SONs are integral to some advanced RAN architectures, such as cloud-based RANs.

SON approaches have also been applied to managing mobile core networks.

New approaches such as machine learning and artificial intelligence can help the SON add
more value to network quality and customer experience.

SON and service assurance solutions should move to a closer integration.

Finally, tighter integration with the business support system (BSS), and information about the value of each user, needs to increasingly drive how resources get optimized by the SON system.

Benefit Rating: High


Market Penetration: 1% to 5% of target audience

Maturity: Adolescent
Sample Vendors: Actix; Alcatel-Lucent; Cellwize; Cisco; Ericsson; Huawei; NEC; Nokia; ZTE
Recommended Reading:
"Magic Quadrant for LTE Network Infrastructure"
"Emerging Technology Analysis: Self-Organizing Networks, Hype Cycle for Wireless Networking
Infrastructure"
"Hype Cycle for Wireless Networking Infrastructure, 2013"
"Research Roundup for Mobile Networks, 4G/LTE, 5G and Small Cells"

Voice Over LTE


Analysis By: Sylvain Fabre
Definition: Voice as an application can be run as an all-Internet Protocol (IP) end-to-end service on
LTE networks. It requires the introduction of an IP Multimedia Subsystem (IMS) core for support of
the service over IP. Advantages include very fast call setup and the possibility to include voice in
IMS and, later, non-IMS applications.
Position and Adoption Speed Justification: There are many issues to resolve for voice over LTE (VoLTE), which explains why, out of over 288 commercial LTE networks worldwide, there were only three commercial VoLTE offerings launched as of May 2014, all within South Korea, plus one soft launch by CSL in Hong Kong. A further 28 communications service providers (CSPs) are currently deploying VoLTE, including China Mobile, SoftBank, Sprint and Verizon (according to the Global Mobile Suppliers Association).
Some notable leading operators intend to launch VoLTE commercially in 2014, such as NTT Docomo, which is targeting this summer, followed by other Japanese CSPs such as KDDI au. In addition to these, AT&T in the U.S. launched VoLTE and branded it "high-definition voice" in some initial markets in May 2014.
For the time being, VoLTE seems trapped within a circle of issues, starting with there being little
point in rolling it out until there is countrywide LTE coverage and a need to refarm 2G spectrum (or
3G in the U.S.). Until that happens, CSPs will just use 2G or 3G for voice with circuit-switched fallback (CSFB). Wide-scale roaming agreements are not in place, and there are technical issues with
LTE to meet regulatory requirements for a voice service (for emergency services, for example); this
means that CSPs would not have an incentive to refarm 2G spectrum and justify capex on a
nationwide VoLTE service.
There are only about 24 bilateral roaming agreements in place so far for LTE. The supporting IP
eXchange (IPX) infrastructure, required to connect the Diameter signaling between countries, needs
also to catch up to allow for the needed Diameter messaging. It took 15 years to get the general

packet radio service roaming eXchange (GRX) infrastructure, which handles signaling between operators for 2G/3G roaming situations, to where we are now, with about 200 countries covered for 2G/3G roaming. It will not take as long for LTE/VoLTE, but it is still several years away; this explains the hesitation from many CSPs.
In roaming situations, IMS interoperability between different IMS vendors (for example, Samsung
and Qualcomm protocol stacks) may be a challenge. Finally, charging principles between CSPs and
transit signaling networks (that is, the IPX) need to be ironed out.
On a regulatory note, introducing VoLTE would generate demand from government/law
enforcement for legal interception facilities. Usually such mandates are an extra cost for CSPs, but
not a revenue generator.
While there are many, nontrivial technical issues at play, the main roadblock is the business model.
A leading operator CTO commented that the issue with VoLTE is that there is no business case.
This may be an extreme point of view but it will take several Tier 1 CSPs taking the lead to show
successful monetization.
As the first use of QoS Class Identifier (QCI) for mobile QoS for LTE will be VoLTE (IP voice), we
expect VoLTE and QCI to mature at comparable rates.
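As a sketch of how a VoLTE call maps onto QCI bearers, the Python snippet below lists the standardized characteristics most relevant to voice; the values are quoted from memory of 3GPP TS 23.203 and should be verified against the current release before use.

    # Illustrative QCI mapping for a VoLTE call (values to be verified against 3GPP).
    QCI = {
        1: {"type": "GBR",     "delay_budget_ms": 100, "use": "conversational voice (VoLTE media)"},
        5: {"type": "non-GBR", "delay_budget_ms": 100, "use": "IMS signaling (SIP)"},
        9: {"type": "non-GBR", "delay_budget_ms": 300, "use": "default bearer / best-effort data"},
    }

    def bearers_for_volte_call():
        # A VoLTE call typically uses a dedicated QCI 1 bearer for voice media,
        # with SIP signaling carried on the QCI 5 bearer set up at IMS registration.
        return [5, 1]

    for qci in bearers_for_volte_call():
        print(qci, QCI[qci])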
User Advice:

Plan the network evolution alongside your service road map: an IMS Multimedia Telephony (MMTel) capability is needed to provide presence, video calling and similar services, and software upgrades in the radio access network and evolved packet core are also required for VoLTE support.

Differentiate CSP VoIP so that the branded VoLTE service stands out clearly from over-the-top alternatives offering HD-quality voice (Viber, for example).

Business Impact:

As voice and SMS are decreasing revenue sources for CSPs, data is starting to take over in
many places. Unlimited voice is common on consumer contracts around Europe. In that
context, HD voice quality would be of more interest.

Another issue, partly technology- and partly service-related, is migrating users away from legacy 2G/3G onto the 4G platform. This can be one of the drivers for going to VoLTE, not just as a new service but simply as a replacement for the legacy circuit-switched voice being retired, for example, as the old network is phased out and spectrum refarmed.

VoLTE has high-quality sound and noticeably shorter call setup times (from 7.5 seconds down to 2.6 seconds, as demonstrated by NTT Docomo). It also allows video calling.

Benefit Rating: Moderate


Market Penetration: Less than 1% of target audience
Maturity: Adolescent

Sample Vendors: Alcatel-Lucent; Cisco; Ericsson; NEC; Nokia; Samsung; ZTE

Cloud-Based RAN
Analysis By: Akshay K. Sharma
Definition: Gartner defines cloud-based radio access network (RAN) as a new architecture for the
mobile RAN that combines several key attributes, with some variations according to the
requirements of individual vendors. The cloud aspect refers to the fact that parts of the architecture for the base station are located in the cloud, typically the control elements, such as the base station controller, radio network controller and mobility management entity, where Wi-Fi control for Hotspot 2.0 is emerging as a need.
Position and Adoption Speed Justification: So far, the main proponents of this architecture are
Alcatel-Lucent, with its lightRadio concept, and Nokia, with its Liquid Radio, but new solutions are
emerging from Cisco (via its acquisition of Ubiquisys' small-cell technology and cloud-based
solutions from Quantum), and Ericsson (through its recent DOT small-cell announcement and
Cloud-RAN architecture). Several more leading vendors will have to present products in order for
this segment to be considered mature and stable enough to gain wider adoption by
communications service providers (CSPs).
Alcatel-Lucent's lightRadio and Nokia's Liquid Radio solutions are being used by China Mobile and
Sprint as well as other CSPs. Both vendors are claiming total cost of ownership (TCO) benefits and
better use of processing resources as the intelligence of the network can be pooled and then
applied wherever it is needed at any given moment. Cloud-based RAN is currently niche and there
are debates about how it should be deployed and managed. These issues influence the position on
the Hype Cycle.
User Advice: Consider this architecture for areas where new site acquisition is an issue due to the
cost or local regulations, or where current sites no longer allow larger RAN equipment to be
installed. CSPs can cut costs in two ways. First, the reduced footprint of offloading functions to the
cloud can reduce site costs. Second, resources can be used more efficiently. Generally, parts of the
architecture of the base transceiver station can be pooled and shared between several sites using
baseband processing, for example, in the cloud. The use of self-organizing networks also allows for
unified network management with a mix of cell sizes, from macrocells to small cells, as is typical of
a heterogeneous network.
Business Impact: For CSPs, this concept provides a distributed architecture where components
can be placed in different locations, and capacity can be dynamically allocated where it is most
needed. Significant cost savings are advanced by vendors in this space, with Alcatel-Lucent, for
example, claiming that in the context of Long Term Evolution and small cells, its lightRadio concept
could reduce TCO by over 50%, when compared to a legacy single-RAN architecture. The very
small form factor with system-on-chip components means that the on-site antenna equipment can
be so small that it is no longer visible.


Future applications such as Wi-Fi control for Hotspot 2.0 are emerging within cloud-based
offerings, as well as localized caching, and location-based service applications aggregating
signaling from such platforms. As network functions become virtualized under network function virtualization, watch for further radio applications within cloud-based systems, including controllers providing seamless roaming, as well as mobile video optimization functions.
Benefit Rating: High
Market Penetration: 1% to 5% of target audience
Maturity: Emerging
Sample Vendors: Alcatel-Lucent; Cisco; Ericsson; Nokia
Recommended Reading:
"Magic Quadrant for LTE Network Infrastructure"
"SDN and NFV Offer CSPs a New Way to Do Business"

Hybrid Cloud Computing


Analysis By: David W. Cearley; Donna Scott
Definition: Gartner defines hybrid cloud computing as the coordinated use of cloud services across
isolation and provider boundaries among public, private and community service providers, or
between internal and external cloud services. Like a cloud computing service, a hybrid cloud
computing service is scalable, has elastic IT-enabled capabilities, self-service interfaces and is
delivered as a shared service to customers using Internet technologies. However, a hybrid cloud
service crosses isolation and provider boundaries.
Position and Adoption Speed Justification: Hybrid cloud computing does not refer to using internal systems and external cloud-based services in a disconnected or loosely connected fashion. Rather, it implies significant integration or coordination between the internal and external environments at the data, process, management or security layers.
Virtually all enterprises have a desire to augment internal IT systems with cloud services for various reasons, including capacity, financial optimization and improved service quality. Hybrid cloud computing can take a number of forms. The following approaches can be used individually or in combination to support a hybrid cloud computing approach within and across the various layers, for example, infrastructure as a service (IaaS), platform as a service (PaaS) and software as a service (SaaS):

Joint security and management: Security and/or management processes and tools are applied to the creation and operation of internal systems and external cloud services.

Page 54 of 102

Gartner, Inc. | G00260996


This research note is restricted to the personal use of yolanda.robles@inegi.org.mx

This research note is restricted to the personal use of yolanda.robles@inegi.org.mx

Workload/service placement and runtime optimization: Using data center policies to drive placement decisions to resources located internally or externally, as well as balancing resources to meet SLAs, such as for availability and response time (a minimal placement-policy sketch follows this list).

Cloudbursting: Dynamically scaling out an application from an internal, private cloud platform to an external public cloud service based on the need for additional resources.

Development/test/release: Coordinating and automating development, testing and release to production across private, public and community clouds.

Availability/disaster recovery (DR)/recovery: Coordinating and automating synchronization, failover and recovery between IT services running across private, public and/or community clouds.

Cloud service composition: Creating a solution with a portion running on internal systems, and another delivered from the external cloud environment, in which there are ongoing data exchanges and process coordination between the internal and external environments.

Dynamic cloud execution: The most ambitious form of hybrid cloud computing combines joint security and management, cloudbursting and cloud service compositions. In this model, a solution is defined as a series of services that can run in whole or in part on an internal private cloud platform or on a number of external cloud platforms, in which the software execution (internal and external) is dynamically determined based on changing technical (for example, performance), financial (for example, cost of internal versus external resources) and business (for example, regulatory requirements and policies) conditions.
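As referenced in the workload placement item above, the Python sketch below shows a minimal policy-based placement decision; the policy rules, attribute names and thresholds are hypothetical assumptions rather than the behavior of any cloud management platform.

    # Hypothetical placement policy: choose an internal or external cloud per workload.
    def place(workload):
        if workload["data_residency"] == "on_premises_only":
            return "private_cloud"               # regulatory constraint wins
        if workload["availability_sla"] >= 99.95 and not workload["bursty"]:
            return "private_cloud"               # steady, critical load stays internal
        return "public_cloud"                    # bursty or cost-optimized load goes external

    workloads = [
        {"name": "payments", "data_residency": "on_premises_only",
         "availability_sla": 99.99, "bursty": False},
        {"name": "campaign_site", "data_residency": "any",
         "availability_sla": 99.5, "bursty": True},
    ]
    for w in workloads:
        print(w["name"], "->", place(w))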

We estimate no more than 20% of large enterprises have implemented hybrid cloud computing
beyond simple integration of applications or services. This declines to 10% to 15% for midsize
enterprises, which mostly are implementing the availability/disaster recovery use case. Most
companies will use some form of hybrid cloud computing during the next three years. Some
organizations are implementing cloud management platforms (CMPs) to drive policy-based
placement and management of services internally or externally.
A fairly common use case is in the high availability (HA)/DR arena where data is synchronized from
private to public or public to private for the purposes of resiliency or recovery. A less common but
growing use case (due to complexities of networking and latency) is cloudbursting. The grid
computing world already supports hybrid models executing across internal and external resources,
and these are increasingly being applied to cloud computing. More sophisticated, integrated
solutions and dynamic execution interest users, but are beyond the current state of the art.
Positioning has advanced significantly in a year (from peak to post-peak midpoint) as organizations embrace the public cloud in their business processes and internal services, and engage in designing cloud-native and optimized services. While maturing rapidly, the reality is that
hybrid cloud computing is a fairly immature area with significant complexity in setting it up in
operational form. Early implementations are typically between private and public clouds, and not
often between two different public cloud providers.


Technologies that are used to manage hybrid cloud computing include CMPs, but also specific
services supplied by external cloud and technology providers that enable movement and
management across internal and external cloud resources. Most hybrid cloud computing
technologies and services seek to lock in customers to their respective technologies and services,
as there are no standard industry approaches.
User Advice: When using public cloud computing services, establish security, management and
governance models to coordinate the use of these external services with internal applications and
services. Where public cloud application services or custom applications running on public cloud
infrastructures are used, establish guidelines and standards for how these elements will combine
with internal systems to form a hybrid environment. Approach sophisticated integrated solutions,
cloudbursting and dynamic execution cautiously, because these are the least mature and most
problematic hybrid approaches.
To encourage experimentation and cost savings, and to prevent inappropriately risky
implementations, create guidelines/policies on the appropriate use of the different hybrid cloud
models. Consider implementing your policies in CMPs, which implement and enforce policies
related to cloud services.
Business Impact: Hybrid cloud computing leads the way toward a unified cloud computing model,
in which there is a single cloud that is made up of multiple sets of cloud facilities and resources
(internal or external) that can be used, as needed, based on changing business requirements. This
ideal approach would offer the best-possible economic model and maximum agility. It also sets the
stage for new ways for enterprises to work with suppliers and partners (B2B), and customers
(business-to-consumer), as these constituencies also move toward a hybrid cloud computing
model. In the meantime, less ambitious hybrid cloud approaches still allow for cost optimization,
flexible application deployment options, and a coordinated use of internal and external resources.
Benefit Rating: Transformational
Market Penetration: 5% to 20% of target audience
Maturity: Adolescent
Sample Vendors: Apache CloudStack; BMC Software; HP; IBM; Microsoft; OpenStack; RightScale;
VMware
Recommended Reading:
"Hybrid Cloud Network Architectures"
"Hybrid Cloud Is Driving the Shift From Control to Coordination"
"Cloud Storage Gateways: Enabling the Hybrid Cloud"

Software-Defined Networks
Analysis By: Mark Fabbi; Joe Skorupa


Definition: Software-defined networking (SDN) is an approach to designing, building and operating networks that delivers business agility, and lowers capex and opex. SDN solutions include an abstraction of network topology that allows a single control point for the network, and a centralized controller that uses one or more device control protocols to communicate with the infrastructure. There is also an open northbound API that allows for external programmatic control and decoupling of features and network applications from the underlying network hardware.
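To illustrate what programmatic control through a northbound API can look like, the Python sketch below submits a flow intent to a controller over REST; the endpoint, payload shape and intent semantics are assumptions for illustration and do not correspond to any specific SDN controller's API.

    # Hypothetical northbound-API call: express a flow intent, and let the controller
    # translate it into device-level rules via its southbound control protocols.
    import json
    import urllib.request

    flow_intent = {
        "name": "isolate-dev-test",
        "match": {"src_subnet": "10.20.0.0/16", "dst_subnet": "10.30.0.0/16"},
        "action": "drop",
        "priority": 100,
    }

    req = urllib.request.Request(
        "https://sdn-controller.example.net/northbound/v1/flow-intents",
        data=json.dumps(flow_intent).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # urllib.request.urlopen(req)   # not executed here: the controller URL is illustrative
    print("Would submit intent:", flow_intent["name"])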
Position and Adoption Speed Justification: SDN is the most talked-about technology in the
networking market today and represents a potential transformation in how the market will design,
build, operate and procure network hardware and software. However, the technology is still
nascent, with many complex vendor and organizational dynamics, and there is much standards
work still to be done. A variety of both established vendors and startups are delivering early-stage
SDN-based solutions, but full, robust, end-to-end enterprise-ready solutions are just starting to
appear in the market.
User Advice: Don't get swept up in the massive hype surrounding SDN. The most important action
to take now is to develop cross-functional collaboration and investigate DevOps methodologies to
better integrate server and virtualization, networking and application teams. These teams can then
help identify key use cases, both short-term ones like network agility and longer-term goals to leverage an infrastructure where hardware and software features are decoupled.
In the short term, organizations are encouraged to start with non-mission-critical areas such as
testing and development to get experience with new technologies and approaches to network
design and operations. It is important to allocate time and resources to evaluate SDN technologies
and all vendors, both incumbent and non-incumbent, because SDN can have a fundamental
impact on vendor relationships and business models in networking and related markets.
Business Impact: SDN can increase network agility, simplify management and lead to the
reduction of operational and capital costs, while fostering long-term innovation. The adoption of
SDN has the potential to eliminate the human middleware problem that has plagued network
operations for the past two decades. By bringing network operations into more streamlined and
automated operational processes that are common in virtual environments, user organizations can
bring application deployments in line with the increasing speed of business.
As SDN matures, a significant boost in network innovation may occur as network features and
applications become decoupled from the underlying network hardware. New markets will emerge,
especially for SDN applications that have the potential to completely change physical network
deployment and operational models for enterprise networks. New competitive environments will
also evolve to change the networking landscape and financial models of enterprise networks.
Benefit Rating: High
Market Penetration: Less than 1% of target audience
Maturity: Emerging


Sample Vendors: Alcatel-Lucent (Nuage); Arista Networks; Big Switch Networks; Brocade; Dell;
Extreme Networks; HP; Juniper Networks; NEC; VMware
Recommended Reading:
"Ending the Confusion About Software-Defined Networking: A Taxonomy"

Innovation Management
Analysis By: Neil Osmond; Gyanee Dewnarain
Definition: Innovation management is a business discipline that aims to instill a repeatable and
sustainable innovation process or culture within an organization. Innovation is defined here as the
implementation of ideas that create business value.
Position and Adoption Speed Justification: As with other strategic business disciplines, new
trends within innovation management emerge periodically as the discipline evolves. In addition to
traditional, centralized innovation teams with dedicated resources, a particular emphasis of recent
corporate innovation activities has been implementing mechanisms to foster innovation.
For communications service providers (CSPs), innovation management can incorporate a number of
activities, such as those involving startup incubators and accelerators, research and development
(R&D) labs, venture capital (including joint ventures and acquisitions), ecosystem enablement, and
crowdsourcing.
Startup Incubators, Accelerators and Excellence Centers
The key purpose of startup incubators and accelerators is to identify, nurture and commercialize
innovative technologies and services from startups. The key point here is timing, whether in relation
to "fast pitch" sessions to identify projects with potential, the onboarding process, launches, or
decisions to continue or terminate a project.
Our analysis of CSPs indicates two distinct approaches.
AT&T's Foundry innovation centers, present in the major innovation hubs of Tel Aviv in Israel, and
Palo Alto and Plano in the U.S., offer startups an opportunity to come and pitch their ideas. If they
are successful, AT&T helps them bring their ideas to market. However, there are no expectations in
terms of potential investment in such startups by AT&T or that AT&T will later acquire them.
Telefonica's Wayra and Telstra's muru-D take a different approach, acting as incubation centers for local startups outside the major innovation hubs.
Employee and Consumer Crowdsourcing
Another approach adopted by AT&T is to enable employees to rate and vote for their favorite
projects. Selected projects benefit from "angel funding," and associated employees are
remunerated through trophies, bonuses and executive recognition.

NTT Docomo, by contrast, is seeking ideas and feedback from its customers. It offers them hands-on experience of services and products that are in development and gathers their reactions. This
helps NTT Docomo determine whether these services and products are likely to prove popular with
customers.
R&D Centers and Labs
R&D centers and labs are CSPs' internal innovation units. They are perhaps closest to the traditional
way in which CSPs used to conduct R&D, though their nature tends to be longer-term. However,
many CSPs have revamped their innovation centers and are focusing more of their efforts on
capabilities that can add value to their multiple digital initiatives.
Ecosystem Enablement (Developer Engagement)
This approach involves hosting a community of developers by providing them with tools, APIs and
resources to help with app or service development, testing, submission, discovery, distribution and
monetization. SK Planet, SingTel, Telefonica and AT&T are examples of CSPs with ecosystem
enablement programs. However, high costs and low returns on innovation investment have
prompted some CSPs to move away from this approach.
Venture Capital
A number of CSPs have a venture capital arm that scans the market for innovative startups in which
to invest. Examples are Telefonica Ventures, Docomo Capital, Docomo Innovation Fund, Telstra
Ventures and SingTel's four-pronged venture capital and seed program comprising SingTel Innov8,
Optus-Innov8 Seed, KickStart and AIS. The level of investment can vary, from minority equity holdings and joint ventures to complete acquisition.
User Advice: Any decision to launch an innovation management initiative should be driven by a
conscious recognition by a CSP's executives that it needs to improve its ability to identify and seize
higher-risk opportunities. To succeed, innovation programs require assigned resources and staff
(ideally full-time), commitment from executives, funding to incubate ideas, and a willingness to take
risks through "fail fast" approaches. CSP executives should:

Assess their innovation program against Gartner's Innovation Management Maturity Model to
identify their current level of innovation performance, and check whether it matches their target
level (see "A Maturity Model for Innovation Management").

Focus innovation programs on growth opportunities, which are typically not addressed by
bottom-up, incremental innovation activities.

Make a concerted effort to be seen as good innovation partners for external companies to work
with. The partner management processes must be well thought-through.

Capitalize on their organization's existing strengths, such as in billing, customer services and
service quality, when pursuing innovation.

Business Impact:

Risk: Low.

Technology intensity: Low.

Strategic policy change: Moderate. Explicit commitment by executives to increased risk in exchange for meaningful innovation.

Organizational change: High. It can be difficult to move away from an engineering mindset largely reliant on industry standards bodies and vendor-led advances in technology.

Cultural change: High. It can be very difficult to remedy operational myopia, and cultural shifts require pervasive change to be supported at all levels of management.

Process change: Moderate. Major shifts to higher levels of innovation may require moderate process change to address greater risks and associated resistance.

Competitive value: High. Better products and services, and potential improvements to all critical corporate metrics.

Benefit Rating: Moderate


Market Penetration: 20% to 50% of target audience
Maturity: Adolescent
Recommended Reading:
"What a World-Class IT Innovation Charter Should Contain and Why You Need One"

Mobile QoS for LTE


Analysis By: Sylvain Fabre
Definition: Customer experience monitoring gives communications service providers (CSPs) the
ability to better assess a customer's experience with their devices, services, apps and connectivity.
Quality of service (QoS) therefore includes more than just the index related to voice calls; it can be used to define service-level agreements, which are expected to be an integral way of monetizing Long Term Evolution (LTE).
Position and Adoption Speed Justification: Although seen as a key component of LTE, QoS
might not be fully utilized in these networks at first, mostly because QoS measurement or other activities aren't vitally necessary if the network is underutilized, as customer experience is generally
fine. This was the case with Verizon's Turbo Boost button, for example, which was not very popular
when first launched because subscribers were satisfied with their speeds.
It is currently unclear how widely mobile QoS will be used, even in all-Internet Protocol (IP) mobile
networks based on LTE. According to the Third Generation Partnership Project (3GPP) QoS Class
Identifier (QCI) definition, real-time gaming, voice over IP (VoIP) and voice over IP Multimedia
Subsystem (VoIMS) are the most QoS-sensitive applications for both wireline and wireless
networks. Furthermore, most LTE deployments so far focus on data-only mobile applications.
For LTE operators, mobile QoS can be used as a differentiator for their service offerings. In the future, CSPs will be able to adopt chargeable QoS for prioritized subscribers or applications.
QoS has been available for years via the fixed-line Internet, but is not widely used as ISPs and users
are accustomed to a "best effort" level of service. It is possible that mobile QoS over LTE
technology may also take between two and five years to gain adoption.
Voice calls over 3G cellular networks are circuit-switched, so they have less need for QoS than VoIP networks. However, the need for quality of experience (QoE) will increase when mobile operators move to LTE technology, not just for data services but also for VoLTE (VoIP over LTE).
Mobile operators' business models are evolving, with data services now more important for revenue than
traditional voice services. With the network being upgraded to LTE and becoming fully based on IP,
mobile QoS will extend to characteristics traditionally associated with fixed-line Internet services, to
become end-to-end QoS.
According to the 3GPP, the voice solution for LTE is VoIMS (or VoLTE), which is fully specified as a
future Evolved Packet System solution. QoS has progressed from user equipment initiation only (in
the 3GPP's release 5 standard) to include policy-based networking element initiation in release 8.
The policy-based networking element is called the policy and charging rules function (PCRF), which
is interconnected with the Evolved Packet Core and IMS control network. QoS is based on the
control of the QCI and the Allocation and Retention Priority. The QCI will be used to reserve
resources at the gateway and in the radio access network. High-QCI applications include real-time
gaming and VoIMS.
Since the established concept of QoS for Global System for Mobile Communications (GSM) and
wideband code division multiple access (WCDMA) networks is somewhat complex, LTE System
Architecture Evolution (SAE) aims for a concept of QoS that combines simplicity and flexible access
with backward compatibility. LTE SAE adopts a class-based concept of QoS that gives operators a
simple yet effective way to differentiate between packet services. Evolved Universal Terrestrial
Radio Access Network, an all-IP architecture, will support end-to-end QoS.
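To illustrate the class-based model described above, the following minimal Python sketch maps a simplified subset of QCI classes to bearer characteristics and selects a class for an application. The class names follow the 3GPP QCI concept, but the numeric values, application mapping and helper function are illustrative assumptions only, not a normative rendering of TS 23.203.

# Illustrative QCI table (indicative values only; the normative set is defined by the 3GPP).
# Each entry: (resource type, priority, packet delay budget in milliseconds).
QCI_TABLE = {
    1: ("GBR", 2, 100),      # conversational voice (VoLTE/VoIMS media)
    3: ("GBR", 3, 50),       # real-time gaming
    5: ("non-GBR", 1, 100),  # IMS signaling
    9: ("non-GBR", 9, 300),  # default best-effort bearer
}

# Hypothetical mapping from application type to QCI, used when setting up a dedicated bearer.
APP_TO_QCI = {"volte_voice": 1, "realtime_gaming": 3, "ims_signaling": 5, "web_browsing": 9}

def bearer_parameters(app_type):
    """Return the bearer treatment a PCRF-like policy function might request for an application."""
    qci = APP_TO_QCI.get(app_type, 9)  # fall back to the default bearer
    resource_type, priority, delay_budget_ms = QCI_TABLE[qci]
    return {"qci": qci, "resource_type": resource_type,
            "priority": priority, "delay_budget_ms": delay_budget_ms}

print(bearer_parameters("realtime_gaming"))

In a real deployment, such a mapping would be enforced by the PCRF and the radio access network rather than by application code, but the sketch shows how class-based QoS lets operators differentiate packet services with a small set of parameters.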
User Advice:

Put quality of service at the forefront of your brand strategy, but factor in the costs, as QoS requires major, sustained network investment that may not be sustainable for every CSP.

Implement and enforce mobile QoS through a variety of means, such as self-organizing networks, operations support systems and service assurance.

Realign and streamline network management and service assurance solutions.

Define requests for quotation for mobile QoS with a clear service assurance strategy, taking into
account performance management, test and measurement solutions, and fault and event
management solutions.

Consider working with a single strategic partner that can provide a clear perspective of a long-term evolutionary path for service assurance, across services, platforms and departments.

Business Impact:

End-to-end QoS is vital not only for video and VoIP quality, but for any service. End-to-end QoS
control will ensure that LTE can provide telco-quality video services and VoIP.

The voice call continuity service in the IMS core will ensure that VoIP services are interoperable
with circuit-switched voice and VoIP services on existing Universal Mobile Telecommunications
System networks. However, the latency needs to be improved to provide 2G-/3G-equivalent
voice quality.

End-to-end traffic shaping is now permitted in regions such as the U.S., where net neutrality rules have been struck down.

Benefit Rating: High


Market Penetration: 1% to 5% of target audience
Maturity: Adolescent
Sample Vendors: Alcatel-Lucent; Amdocs (Bridgewater Systems); Cisco; Ericsson; Huawei; NEC;
Nokia; Oracle (Tekelec); ZTE
Recommended Reading:
"Vendor Rating: Ericsson"
"Vendor Rating: Nokia"
"Toolkit: Developing KPIs From Customer Experience Monitoring Equipment"

Mobile Unified Communications


Analysis By: Bill Menezes
Definition: Mobile unified communications (mUC) is the integration and presentation of unified
communications (UC) capabilities, such as single-number reach, presence and instant messaging
(IM) on enterprise wireless devices. Many of the applications found and used on desk phones have
become available on smartphones and tablets; all leading UC vendors offer mobile UC clients for
the leading mobile device platforms.
Position and Adoption Speed Justification: Vendors and telecom service providers continue to
introduce or update mobility features for their UC products or services and to develop support for
Web real-time communications (WebRTC) and other emerging solutions that offer UC elements.
Vendors also are integrating conferencing and collaboration technologies further into UC platforms,
enabling users to share content and their screens more easily as more users move from pure
audioconferencing to Web and videoconferencing from their smart devices.

With enterprise adoption of Internet Protocol (IP)-based voice systems, more enterprises have the
option to integrate wireless device support. Current-generation smartphones can already use
messaging, presence, voice and other mUC functions from vendors such as Microsoft (Lync) and
Cisco (Jabber), or ShoreTel, which has an agnostic mUC solution that interoperates with many UC
core platforms. Most of the functionality of mUC is integrated into IP telephony systems, and
changes in licensing methods mean it's no longer more expensive to use a mobile phone than a
desk phone.
As a result, mUC will become a standard, integrated capability across most vendors supporting UC,
making this stand-alone technology obsolete before plateau. As the enterprise telephony plan
emerges, evaluate users' mobility requirements and their needs to integrate wired and wireless
communications into a single device.
User Advice: Create significant value for users and the company by integrating smartphones and tablets, which are proliferating in the enterprise, into the corporate network. While desk phones
and desktop video devices will continue to have a role in many organizations, it will be more
effective to meet the needs of many office workers by making the user's mobile device both the
default work phone and the desk video device. View mUC clients within the context of an
enterprise's broader enterprise and mobile strategies for communication, collaboration and content
management. Work closely with users to identify business functions and roles that can benefit from
mUC, and educate users about the opportunities.
Establish minimum standards for smartphone performance, given the processing capability needed to support not only mUC but also other tasks, such as the organization's mobile device management policies. Address the needs of your mobile users first, and then evaluate support for presence/status and click-to-conference call applications that will improve use and productivity, as well as IM and presence status, which are frequently used on the desktop. Look to incorporate mobility into next-generation IP telephony projects. Encourage employees with company-owned mobile devices to use mobile applications in place of desk phones. Planning should include the possibly expensive cost of wireless office coverage, via a cellular distributed antenna system or enhanced wireless LAN (WLAN), if those solutions are not already in place.
Business Impact: mUC can transform how vendors support communications for mobile users in
the enterprise by improving highly mobile workers' access to the enterprise network. Those workers
must also be able to readily connect to aid business processes and to collaborate with in-office
colleagues. The single-number feature simplifies access to employees and is location-independent,
while enabling consolidation of wired and wireless services, elimination of wired desk phones and
support for anytime, anywhere access to enterprise communications.
Benefit Rating: High
Market Penetration: 20% to 50% of target audience
Maturity: Adolescent
Sample Vendors: AT&T; Avaya; Cisco; Motorola; ShoreTel; Sprint; Tango; Telepo; Verizon
Enterprise Solutions; Vodafone

Recommended Reading:
"Optimize UC Adoption With These Best Practices"
"Focus on User Experience and Quality to Drive Softphone Adoption"
"Highly Disruptive Unified Communications Technologies Shaping the Industry's Trajectory"
"Mobile UC: Extending Collaboration to Smartphones and Tablets"
"Magic Quadrant for Unified Communications as a Service, North America"

Telecom Analytics
Analysis By: Kamlesh Bhatia; King-Yew Foong
Definition: Telecom analytics, encompassing business intelligence (BI) technologies, is used by
communications service providers (CSPs) to either increase revenue or reduce costs. Extended
capabilities include ad hoc querying and multidimensional analyses, predictive and descriptive
modeling, data mining, text analytics, forecasting and optimization. Telecom analytics can improve
visibility into core operations, internal processes and market conditions, as well as discern trends
and establish forecasts.
Position and Adoption Speed Justification: Telecom analytics are usually employed by CSPs to
gain a better understanding of the efficiency of their own internal processes, improve operational
performance and evaluate market conditions. Common uses of analytics by CSPs include customer
retention programs, cross-selling and upselling, customer segmentation, fraud prediction modeling,
service and sales effectiveness, service pricing and demand forecasting.
As CSPs globally increase their focus on information as a means to achieve growth and competitive
advantage, the use of analytics has moved deeper into operations to achieve a real-time view of
customers and assets. New sources of information now include deep packet inspection, home
subscriber servers, video optimization equipment and on-device clients.
The increasing use of software in service and network management is blurring traditional
information technology/operational technology (IT/OT) boundaries, thus creating new opportunities
for CSPs to access real-time data and apply advanced modeling techniques like predictive
analytics. For example, CSPs are deploying analytics-driven capabilities in the network to predict
traffic patterns and failures to enhance customer experience. In other cases, network optimization is
achieved through linking analytics with intelligent policies and charging control solutions.
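As a minimal illustration of the predictive use case mentioned above, the sketch below trains a simple scikit-learn classifier on per-cell network KPIs to flag cells at risk of congestion. The field names, thresholds and tiny in-line dataset are hypothetical; a production deployment would draw on much richer probe, OSS and customer data.

# Hypothetical per-cell KPI samples: [average utilization %, dropped-call rate %, handover failure %]
# Labels: 1 = the cell became congested in the next hour, 0 = it did not.
from sklearn.linear_model import LogisticRegression

X = [
    [62, 0.4, 1.1],
    [91, 1.8, 3.5],
    [55, 0.2, 0.7],
    [88, 1.5, 2.9],
    [70, 0.6, 1.3],
    [95, 2.2, 4.0],
]
y = [0, 1, 0, 1, 0, 1]

model = LogisticRegression().fit(X, y)

# Score the current KPI snapshot for a cell and flag it for proactive optimization.
current = [[86, 1.4, 2.5]]
risk = model.predict_proba(current)[0][1]
if risk > 0.5:
    print(f"Cell flagged for proactive capacity or policy action (risk={risk:.2f})")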
Tools and techniques that contribute to big data environments will lead to new investment by CSPs
in analytics, especially to link customer usage and behavior to activity on partner and social
platforms, with the aim to better monetize events and trends. CSPs entering adjacent or converging
markets such as media and entertainment will also spur interest in improved analytics capabilities.
The position on the Hype Cycle is indicative of heightened awareness among CSP teams about the use of analytics. Use cases, especially those focused on customer management, have been implemented, and the focus is now shifting to more complex requirements based on real-time and predictive capabilities across the IT/OT environment.
User Advice: Extend the use of analytics into areas with high operational impact and the potential to deliver immediate benefits, such as service assurance, especially in the context of next-generation, multiservice networks.
Redesign key performance indicators (KPIs) and processes to derive maximum benefits from
telecom analytics. This ensures that the right data and information reach executives who are
empowered to make appropriate decisions. Lean processes will ensure that decisions can be made
in real time or near real time. More often than not, it is people (not technology) that play a pivotal
role in determining success.
Encourage better data governance to enrich visibility afforded by telecom analytics. Address
stakeholders' (such as business partners, regulators and customers) concerns around use and
storage of data.
Consider working with vendors that offer out-of-the-box capabilities with well-defined outcomes
and shared risk models, especially in new areas around digital services enablement and
monetization.
Business Impact: Telecom analytics can help CSPs effectively recognize, understand and respond
to emerging operational and market developments. Successful analytics deployment can provide a
competitive advantage through greater agility to exploit market opportunities and correct internal
organizational shortcomings.
Telecom analytics can also help CSPs to invest in areas that are truly critical and strategic, with
more efficient allocation of capital investments. Astute use of telecom analytics can also go a long
way toward enhancing customer experience and real-time response, which in turn leads to better
customer satisfaction and a higher lifetime value for customers.
Benefit Rating: High
Market Penetration: 20% to 50% of target audience
Maturity: Early mainstream
Sample Vendors: Amdocs; Ericsson; Flytxt; Guavus; IBM; KXEN; Nokia Solutions and Networks;
Oracle; Pontis; SAS
Recommended Reading:
"Market Insight: Data Monetization by CSPs, Worldwide, 2014"
"Research Roundup: CSP Strategy for Monetizing Customer Data"
"Market Insight: Calculating the Value of CSP Customer Data"

"Market Trends: Linking Network and IT Intelligence to Drive Context-Aware Customer Experience"

Real-Time Infrastructure
Analysis By: Donna Scott
Definition: Real-time infrastructure (RTI) represents a shared IT infrastructure in which business policies and SLAs drive the
dynamic allocation and optimization of IT resources so that service levels are predictable and
consistent despite unpredictable IT service demand. RTI provides the elasticity, functionality, and
dynamic optimization and tuning of the runtime environment based on policies and priorities across
private, public and hybrid cloud architectures. Where resources are constrained, business policies
determine how resources are allocated to meet business goals.
Position and Adoption Speed Justification: The technology and implementation practices are
immature from the standpoint of architecting and automating an entire data center and its IT
services for real-time infrastructure (RTI). However, solutions have emerged that optimize specific
applications or infrastructure. From an application standpoint, creating cloud-native or optimized services typically involves elasticity, with scale-out capacity added in response to increased demand.
These are often built through PaaS or through cloud management platforms (CMP). Many CMP
vendors have enabled models or automation engines to achieve RTI (for example, through the
implementation of logical service models with policies that are defined for the minimum and
maximum instances that can be deployed in a runtime environment). Building elasticity is not turnkey; rather, IT organizations must still write custom code (for example, automation and orchestration logic) to achieve their overall dynamic optimization goals and to determine when to trigger or initiate the optimization. Moreover, although virtualization is not required to architect for RTI, many
CMP solutions only support virtualized environments instead of offering more complex alternatives
that require integration to physical resources. CMPs often include some infrastructure optimization
functionality such as intelligent placement of workloads or services based on pre-established
policies. Moreover, infrastructure management tools that analyze service capacity and use of the
infrastructure may be employed to rebalance the load within or across clusters.
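A minimal sketch of the policy-driven elasticity described above follows. The policy fields, thresholds and scaling logic are illustrative assumptions rather than the behavior of any particular CMP, but they show how a logical service model with minimum and maximum instance counts can drive a scaling decision at runtime.

from dataclasses import dataclass

@dataclass
class ElasticityPolicy:
    """Logical service model policy: bounds and triggers for one service tier (hypothetical fields)."""
    min_instances: int = 2
    max_instances: int = 10
    scale_out_cpu: float = 0.75   # add capacity above this average CPU utilization
    scale_in_cpu: float = 0.30    # remove capacity below this average CPU utilization

def desired_instances(current: int, avg_cpu: float, policy: ElasticityPolicy) -> int:
    """Return the instance count a service governor might request for the next interval."""
    if avg_cpu > policy.scale_out_cpu:
        target = current + 1
    elif avg_cpu < policy.scale_in_cpu:
        target = current - 1
    else:
        target = current
    # Clamp to the policy bounds so the runtime never violates the service model.
    return max(policy.min_instances, min(policy.max_instances, target))

policy = ElasticityPolicy()
print(desired_instances(current=4, avg_cpu=0.82, policy=policy))  # -> 5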
A key underlying requirement for dynamic optimization is software-defined anything (SDx), to enable
automation and optimization through APIs and programmability. Gartner believes that RTI will be
subsumed in SDx terminology over the next few years as it gets more clearly defined and more
offerings come to market. As in the past, we will see individual vendor progress, especially in
"software stacks," but not in largely heterogeneous environments because of the lack of standards,
as well as the desire for vendors that build such functionality to create benefits for their platforms
(and not their competitors' platforms).
While RTI is increasing in use and implementation, it still has a fairly low penetration rate of around
20% of large enterprises, and a much lower rate when considering the proportion of implemented
services. Lack of architecture and application development skills in the infrastructure and
operations (I&O) organization hampers implementation of RTI in all but the most advanced
organizations. For customers who desire dynamic optimization to integrate multiple technologies
together and orchestrate analytics with actions, a great deal of integration and technical skills are
required. However, organizations that pursue agile development (and DevOps) for their Web
environments will often implement RTI for these services in order to map increasing demand on
their sites with an increasing supply of resources.
As this demand rises due to new developments in mobile computing, Web, analytics, and the
Internet of Things, we will see greater penetration in these new application environments, as they
will be built for elasticity. In another RTI use case, enterprises are implementing shared disaster
recovery data centers, whereby they dynamically reconfigure test/development environments to
look like the production environment for disaster recovery testing or when a disaster strikes. This type of
architecture can typically achieve recovery time objectives in the range of one to four hours after a
disaster is declared. Typically, implementation is not triggered automatically but is manually initiated
where the automation is prewritten.
User Advice: Surveys of Gartner clients indicate that the majority of IT organizations view RTI
architectures as desirable for gaining agility, reducing costs and attaining higher IT service quality
and that about 20% of organizations have implemented RTI for some portion of their portfolios.
Overall progress is slow for internal deployments of RTI architectures because of many
impediments, especially the lack of IT management process and technology maturity levels, but
also because of organizational and cultural issues.
RTI is also slow for public cloud services, where application developers may have to write to a
specific and proprietary set of technologies to get dynamic elasticity. While less so than in prior years,
Gartner sees technology as a significant barrier to RTI, specifically in the areas of root cause
analysis (which is required to determine what optimization actions to take), service governors (the
runtime execution engines behind RTI analysis and actions) and lack of standards. However, RTI
has taken a step forward in particular focused areas, such as:

Dynamic and policy-based provisioning of development/testing/staging and production environments across private, public and hybrid cloud computing resources

Optimally provisioned cloud services based on capacity and policies (for example, workload and service placement)

Server virtualization and dynamic workload movement and optimization

Reconfiguring capacity during failure or disaster events

Dynamic scaling of application instances, especially for Web-oriented scale-out applications built using PaaS or through CMPs.

IT organizations that desire RTI should focus on maturing their management processes using ITIL
and maturity models (such as Gartner's ITScore for I&O Maturity Model) as well as their technology
architectures (such as through standardization, consolidation and virtualization). They should also
build a culture that is conducive to sharing the infrastructure and should provide incentives such as
reduced costs for shared infrastructures.
Gartner recommends that IT organizations move to at least Level 3 (proactive) on the ITScore for I&O Maturity Model in order to plan for and implement RTI; before that level, a lack of skills and processes derails success. Moreover, for organizations using agile development, operational skills
and capabilities should be infused into them to enable RTI. Organizations should investigate and
consider implementing RTI solutions early in the public or private cloud or across data centers in a
hybrid implementation, which can add business value and solve a particular pain point, but should
not embark on data-center-wide RTI initiatives.
Business Impact: RTI has three value propositions, which are expressed as business goals:

Reduced costs that are achieved by better, more efficient resource use and by reduced IT
operations (labor) costs

Improved service levels that are achieved by the dynamic tuning of IT services

Increased agility that is achieved by rapid provisioning of new services or resources and scaling
the capacity (up and down) of established services across both internally and externally sourced
data centers

Benefit Rating: Transformational


Market Penetration: 5% to 20% of target audience
Maturity: Adolescent
Sample Vendors: Amazon; BMC Software; IBM; Microsoft; Oracle; Red Hat; RightScale;
ServiceMesh; VMTurbo; VMware
Recommended Reading:
"Cool Vendors in Cloud Management, 2014"
"Market Guide for Cloud Management Platforms From Large Software and Emerging Vendors"
"Market Guide for Integrated Infrastructure Systems Cloud Management Platforms"
"Market Guide for Open Source Cloud Management Platforms"
"How to Build an Enterprise Cloud Service Architecture"

Machine-to-Machine Communication Services


Analysis By: Sylvain Fabre; Eric Goodness
Definition: Managed machine to machine (M2M) communication services encompass integrated
and managed infrastructure, application and IT services to enable enterprises to connect, monitor
and control business assets and related processes over a fixed or mobile connection. Managed
M2M services contribute to existing IT and/or operations technology (OT) processes. M2M
communication services are the connectivity services for many Internet of Things (IoT)
implementations.

Position and Adoption Speed Justification: M2M technology continues to fuel new business
offerings and support a wide range of initiatives, such as smart meters, road tolls, smart cities,
smart buildings and geofencing assets, to name a few.
The key components of an M2M system are:

Field-deployed wireless devices with embedded sensors or radio frequency identification (RFID)
technology

Wireless and wireline communication networks, including cellular communication, Wi-Fi,


ZigBee, WiMAX, generic DSL (xDSL) and fiber to the x (FTTx) networks

A back-end network that interprets data and makes decisions (for example, e-health
applications are also M2M applications)

There are currently few service providers that can deliver end-to-end M2M services. The value
chain remains fragmented. Service providers are trying to partner with others to create a workable
ecosystem.
M2M services are currently provided by three types of provider:

M2M service providers. Mobile virtual network operators and companies associated with an
operator that can piggyback on that operator's roaming agreements (for example, Wyless, Kore
Telematics and Jasper Wireless).

Communications service providers (CSPs). Some CSPs, such as Orange in Europe and AT&T
in North America, have quietly supplied M2M services for several years. However, CSPs are
now marketing M2M services more vigorously, and those without a strong M2M presence so far
are treating it more seriously by increasing their marketing or creating dedicated M2M service
divisions (for example, T-Mobile, Telenor and Vodafone).

M2M service aggregators. These encompass traditional outsourcers and emerging players
that bundle connectivity into systems resale and integration (such as Modus Group or Integron).

One of the key technology factors that may affect M2M service deployment is the capability to
support mobile networks. Early M2M services were smart meters, telematics and e-health monitors,
which are expected to be widely used in the future. In its Release 10, the Third Generation
Partnership Project (3GPP) worked on M2M technology to enhance network systems in order to
offer better support for machine-type communications (MTC) applications. The 3GPP's TS 22.368
specification describes common and specific service requirements for MTC.
The main functions specified in Release 10 are overload and congestion control, and the recently
announced Release 11 investigates additional MTC requirements, use cases and functional
improvements to existing specifications. End-to-end real-time security will also become an
important factor when more important vertical applications are brought into cellular networks.
Another key factor on the technology side that may impact mass deployment of M2M
communication services is the level of standardization. Some key M2M technology components
(RFID, location awareness, short-range communication and mobile communication technologies,

for example) have been on the market for quite a long time, but there remains a lack of the
standardization necessary to make M2M services cost-effective and easy to deploy, and thereby enable this market to take off. M2M standardization may involve many technologies (such as the
Efficient XML Interchange [EXI] standard, Constrained Application Protocol [CoAP] and Internet
Protocol Version 6 over Low-Power Wireless Personal Area Networks [IPv6/6LoWPAN]) and
stakeholders, including CSPs, RFID makers, telecom network equipment vendors and terminal
providers. The European Telecommunications Standards Institute has a group working on the
definition, smart-metering use cases, functional architecture and service requirements for M2M
technology.
We expect that M2M communication services will be in the Trough of Disillusionment in 2015.
Procurement teams will perceive that prices are too high and the space unnecessarily complex (for example, roaming or multi-country implementations), especially when contrasted with consumer/wearables IoT, which will use the smartphone as a gateway to the Internet.
User Advice: As M2M communications grow in importance, regulators should pay more attention
to standards, prices, terms and conditions. For example, the difficulty of changing operators during
the life of equipment with embedded M2M technology might be seen by regulators as potentially
monopolizing. Regulators in France and Spain already require operators to report on M2M
connections, and we expect to see increased regulatory interest elsewhere.
For the end user, the M2M market is very fragmented because no single end-to-end M2M provider
exists. A number of suppliers offer enterprise users monitoring services, hardware development,
wireless access services, hardware interface design and other functions. As a result, an adopter has
to do a lot of work to integrate the many vendors' offerings. On top of this, business processes may
need redefining.
While M2M is usually part of a closed loop OT environment run by engineering, it could be
facilitated and exploited by an aligned IT and OT approach. In some cases, M2M may be deployed
and supported by IT departments with adequate skills and understanding.
An enterprise's M2M technology strategy needs to consider the following issues:

Scope of deployment

System integration method

Hardware budget

Application development and implementation

Wireless service options

Wireless access costs

Business Impact: M2M communication services have many benefits for enterprise users,
governments and CSPs. They can dramatically improve the efficiency of device management. As
value-added services, they also have considerable potential as revenue generators for CSPs. The
success of these services will be important for CSPs' business growth plans.

M2M communication services are expected to be the critical enablers for many initiatives that fall
under the "smart city" umbrella and contribute to the IoT. Examples are smart grid initiatives with
connected smart grid sensors to monitor distribution networks in real time, and smart transportation
initiatives with embedded telematics devices in cars to track and control traffic. M2M
communication services will also connect billions of devices, causing further transformation of
communication networks.
M2M communication services should be seen as an important set of facilitating technologies for use
in operational technologies. At an architectural level, particular care should be taken when choosing
M2M solutions to ensure they facilitate the alignment, convergence or integration of operational
technology with IT.
As CSPs' M2M portfolio broadens and goes beyond connectivity, the number of solutions aimed at
specific industry verticals is growing at a fast rate. Most CSPs with M2M offerings provide vertically
integrated, end-to-end solutions in the areas of automotive, utilities, transport and logistics, and
healthcare, the latter of which is experiencing particularly fast growth for CSPs.
Benefit Rating: Transformational
Market Penetration: Less than 1% of target audience
Maturity: Adolescent
Sample Vendors: AT&T; Jasper Wireless; KDDI; Kore Telematics; Modus Group; Orange France;
Qualcomm; Sprint; Telefonica; Telenor; Verizon; Vodafone; Wyless

Next-Generation Service Delivery Platforms


Analysis By: Martina Kurth
Definition: A service delivery platform (SDP) is an environment or architecture (comprising a set of
systems, data and processes) that is built to allow the swift creation, deployment, execution and
orchestration of services, regardless of underlying technologies or where the service runs and is
delivered to. This may include Internet Protocol (IP) connectivity and services, third-party content,
and application and composite value-added (VAS) services.
Position and Adoption Speed Justification: CSPs can no longer sustain their traditional network-centric approach to service delivery. SDP functionality has now shifted from merely enabling value-added services for consumer applications to service innovation, which is happening outside telcos. This means that SDPs have become key business enablers as the focus shifts toward new
business models that entail third-party participation, revenue generation and improvements to the
customer experience. SDPs entail service creation, orchestration, service broker and composition
environments, cloud enablement, and exposure.
To tap into these new business models for connected digital services and remain relevant for their
customers, CSPs need to expose their network and IT capabilities and manage partner

relationships better with the ecosystem of third-party developers and content and application
providers.
The proliferation of IP, Long-Term Evolution (LTE), mobile broadband and advanced multiscreen devices drives the adoption of SDPs. The SDP concept has now moved beyond the network and
service layer into customer-facing resources (CRM and self-service). This enables an end-to-end
process orchestration to ensure adequate customer experience through any digital interaction
channel. Functions comprise network and service exposure, device management, user device
clients, content management, end-user data, policies, next-generation operations support systems
and business support systems, parts of the control layer (IP Multimedia Subsystem or IP) and even
end-user applications.
Most CSPs have gone through numerous generations of SDPs, which has resulted in complex, vertically integrated service delivery environments. CSPs are modernizing their SDP infrastructures to enable
the business case for new, converged and digital services, while mitigating legacy investment in
telecom networks and IT.
Additionally, SDP solutions provide the mechanisms to bridge the gap between the legacy and new
technologies and digital channels, allowing exposure, brokering, composition and orchestration of
composite services. This comprises external resource and service components, such as intelligent
networks, IP, LTE, Web-based services and third-party content.
Service brokers, along with parts of the platform, have moved into the cloud. This includes network
and service exposure, for example, and APIs for third-party content providers focusing on content
creation and delivery, as well as related partner settlement and charging. We are seeing
deployments of convergent SDP and cloud infrastructures and the sharing of common requirements
and support functions. This helps to create new, composite cloud services representing additional
revenue potential for CSPs.
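As a minimal illustration of network and service exposure with partner settlement metadata, the sketch below uses Flask to publish a hypothetical location enabler behind a REST-style endpoint. The URL, payload and charging fields are illustrative assumptions, not any specific CSP's or vendor's exposure API.

from flask import Flask, jsonify, request

app = Flask(__name__)

# Stand-in for the network's location enabler (hypothetical data).
LOCATIONS = {"tel:+15551230001": {"lat": 40.42, "lon": -3.70, "accuracy_m": 150}}

@app.route("/exposure/v1/location/<subscriber_id>")
def get_location(subscriber_id):
    # A real exposure layer would authenticate the partner and check the subscriber's consent.
    if request.headers.get("X-Partner-Key") is None:
        return jsonify({"error": "missing partner credentials"}), 401
    location = LOCATIONS.get(subscriber_id)
    if location is None:
        return jsonify({"error": "unknown subscriber"}), 404
    # Charging metadata attached so the partner settlement system can rate the API call.
    return jsonify({"subscriber": subscriber_id, "location": location,
                    "charging": {"event": "location.read", "units": 1}})

if __name__ == "__main__":
    app.run(port=8080)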
User Advice:

Transform your SDP environments based on a modular, incremental and horizontal evolution.
Invest in a best-of-breed, IT-centric SDP infrastructure on top of existing legacy (network)
SDPs, then gradually retire legacy SDPs. Transform silo-based service infrastructures into an
open standards-based SDP architecture.

Place immediate efforts on pragmatic enhancements to existing telco services to leverage


legacy assets for new composite services. Customer profiling, contextual analytics and
improvements to the user experience will be imperative.

Ensure competitiveness by exposing network assets to third parties and embrace third-party
content to support new and more developed business models and remain relevant to your customers. For example, enablers such as presence and location could be exposed to third
parties to help build (and later on charge for) innovative IT services using social media on the
Internet.

For cost-efficiency and time-to-market reasons, evaluate alternative SDP delivery models, such
as hosted software as a service, platform as a service and the cloud. Multitenant services, third-party content creation and settlement, as well as enterprise services, may be particularly well
suited to these models.

Monitor the evolution of other relevant work involving standards, such as from the European
Telecommunications Standards Institute (ETSI) and relating to network function virtualization
(NFV)/SDN architectural frameworks. In five to 10 years, parts of SDP functionality may reside
deeper in the network, where applications are expected to drive more service delivery
processes simultaneously. In the meantime, invest in SDP infrastructures supporting hybrid
operational models (network, IT, and virtualization) and leverage investments in existing
infrastructures.

Business Impact: Investments in SDP environments have a profound long-term impact on CSPs'
capability to generate new revenue streams, as well as the service experience of end users. They
support the business case for new, converged and digital services without requiring heavy network
investments.
Driven by an Internet-oriented service experience, CSPs are transforming their service delivery
platform environments toward an architecture that enables service innovation and improved
customer experience. This means SDPs provide an opportunity for CSPs to capitalize on new
business models, such as business-to-consumer and business-to-business-to-consumer
strategies, as well as emerging machine-to-machine opportunities.
SDPs help to bridge the gap between CIO, CTO and CMO as these roles shift toward becoming
business technology enablers in the context of CSPs' transition toward digital service providers.
Additionally, CSPs are starting to explore where and how to integrate pertinent technologies such
as Web Real-Time Communication, software-defined networking, network function virtualization
and converged over-the-top IP applications.
Flexible and open SDPs will also play an important role at the core of next-generation service
networks, where they will be crucial for enabling the "multidimensional" next-generation telco
business model. In this model, end users can also be "producers," additional revenue streams can
come from non-end-user third parties (advertisers, for example), and CSPs can work together to
increase their reach. Carriers are also enablers and wholesale providers.
Benefit Rating: Transformational
Market Penetration: 20% to 50% of target audience
Maturity: Early mainstream
Sample Vendors: Accenture; Alcatel-Lucent; Ericsson; HP; Huawei; IBM; Nokia Siemens Networks;
OpenCloud; Oracle
Recommended Reading:
"Market Insight: Service Delivery Platform Market Overview and Strategic Scorecard for Vendors,
2013"


"Market Trends: Top Five Trends in Next-Generation Service Delivery Platforms Reflect Strategy
Shift, Worldwide, 2012"

Retail Mobile Payments


Analysis By: Miriam Burt
Definition: Retail mobile payment applications let customers pay for products or services via their
mobile devices, using a variety of tender types.
Position and Adoption Speed Justification: Retail mobile payment applications can be browser-based, message-based, downloadable to a mobile device, or native applications on the
mobile device. Transactions are initiated or authorized through technologies such as NFC, SMS,
WAP and USSD.
The delivery of mobile payments in the retail market varies, with rates of adoption and growth of the
technologies involved in mobile payments differing by geography.
This topic continues to interest large multichannel retailers due to the ongoing plethora of solutions
from a multitude of vendors of hardware, software and card payment services and, in particular,
from smaller mobile payment vendors backing into the mobile point of service (POS) market for
small and midsize businesses. The hype on digital wallets is also increasing and driving retailer
interest. For example, the Merchant Customer Exchange (MCX) joint venture led by a consortium of
large multichannel retailers in the U.S. continues to garner publicity on its merchant-led mobile
wallet solution with Gemalto.
Due to the amount of hype, rather than large-scale rollouts in large multichannel retailers, this
technology has made another considerable leap forward on the Hype Cycle. More large
multichannel retailers are dabbling in pilots and trials, but the incessant hype and publicity seen in
preceding years have cooled a little. For this technology profile to successfully negotiate the trough
and rapidly ascend the Slope of Enlightenment, retailers will need to use the "doldrums" period to
reassess these technologies in terms of how consumers want to use their mobile devices to make
secure payments.
However, for the following reasons, it will be some time before consumers see the real value of
mobile payments:

Many consumers still believe that mobile payments are less secure than, for example, credit or
debit transactions conducted at the check-out in a physical store. In Gartner consumer surveys
over the past few years, one of the top barriers to using mobile payments continues to be
customer concerns about the security of personal and payment data with mobile payments.

Consumers' concerns regarding the resolution of problems are another major barrier.
Consumers also feel that the process is cumbersome and slower than using cash, checks or
cards at the check-out.

During the past few years, Gartner surveys on m-commerce retail consumer preferences
indicate that, in general, informational services such as checking prices and checking store

locations on mobile phones were at the top of consumers' priority lists, with ordering and
payments at the bottom of the list. According to a recent Gartner retail survey, retailers
indicated that, on average, revenue generated through the m-commerce channel is likely to be
around 2.5%. However, the m-commerce channel will have a significant impact on driving sales to the
other channels, as retailers estimate that it is already driving 3% of sales into the store.

Widespread adoption, especially for NFC-based mobile payments, requires the convergence of
infrastructure with critical mass and the backing of financial institutions, telecommunications
providers, transportation entities and major retailers, together with clear regulations and
guidelines. Two major areas where collaboration is needed are compliance with industry data
security standards, and processes and procedures to deal efficiently and effectively with
customer services, such as investigation and resolution of issues relating to transaction
disputes, occurrences of fraud and chargebacks.

For these reasons, mass global consumer adoption of mobile payments (particularly for NFC-based
mobile payments) could still be around five to 10 years out.
User Advice: Retailers:

Mobile payments are not an amorphous mass. Understand the different types of mobile
payments available and how these map to your requirements in customer payment processes.

Don't let the projected rate of smartphone adoption or the hype around NFC-based contactless
mobile payments drive investments in this solution. Even in often-quoted examples of mobile
payment adoption in Japan, adoption rates of NFC contactless mobile payment are very low.
You could start with non-NFC-based solutions (for example, using a stored-value card payment
solution linked to loyalty). However, monitor the market for signs that the banks, carriers and
payment processors are moving to a unified standard for NFC payments, because this could
help accelerate the adoption of NFC-based payments.

From an investment point of view, find out the priority customers place on using mobile devices
as payment vehicles, how they want to use mobile devices for payment and how this stacks up
against other ways in which mobile devices could be used to generate sales. Our research
shows that the majority of payment transactions still take place in the store. When in a store,
consumers also say that staffed check-outs are still their preferred choice for the check-out
process (through the main bank of tills or through the retailer-provided mobile POS terminals
administered by associates), rather than self-service options, using technologies such as self-check-outs or their mobile devices.

Investigate the propensity of customers to use cash rather than cards to determine the types of
purchases that are likely to drive the adoption of mobile payment.

Get clarity on exactly what mobile wallet schemes can deliver. The mobile wallets should be
able to handle more than just presentment, because consumers will want to have control over
setting preferences, for example, prescribing a preference for using particular cards with
particular retailers.

Pay careful attention to continuing customer concerns about the security and privacy of data
and developments regarding mobile payment standards, and demonstrate compliance. Work with key stakeholders, for example, banks, card payment companies and telecommunications providers, to ensure a streamlined customer process. In particular, make provision for corrective action when things go wrong, for example, disputed payments.

Monitor and assess the progress of contactless-payment transportation schemes in regions


where you operate. These will give a good idea of customers' acceptance and adoption of this
type of mobile payment solution, and provide any lessons learned.

If you are an MVNO or are offering financial services, note the adoption levels for all forms of
mobile payments, from SMS-based transactions to mobile NFC POS-based payments.

Business Impact: In the store, mobile payments could address the need for speed of throughput
and convenience at the cash register, which is particularly important in grocery and convenience
stores. If payment transaction fees using mobile devices are lower than those of credit and debit
cards, there are clear savings for the retailer. However, retailers do not see a robust business case
for upgrading POS terminals to accept contactless payments. The speed of adoption of mobile
payments will be dictated by consumers, so mobile payment solutions must demonstrate how they
can support a secure, hassle-free and, in particular, fast check-out.
In emerging economies, mobile payment may act as a viable alternative to cash. If consumers can
also access microcredit through their mobile payment accounts, then retailers may see higher
spending from transactions that use mobile payments.
Benefit Rating: Moderate
Market Penetration: 1% to 5% of target audience
Maturity: Adolescent
Sample Vendors: Gemalto; Google; M-Pesa (Safaricom); MasterCard; PayPal; Visa
Recommended Reading:
"Mobile Payments Are Not a Top Priority for Consumers"
"Survey Analysis: Tier 1 Retailers Must Capitalize on Consumer Use of Mobile as Key Gateway in
Cross-Channel Shopping"
"Survey Analysis: Multichannel Retailing Drives Revenue to Stores From E-Commerce, Mobile and
Social Shopping"

Mobile Subscriber Data Management


Analysis By: Kamlesh Bhatia
Definition: Mobile subscriber data management (SDM) is the activity whereby parts of datasets that
are related to the same subscriber (such as profile, transactional and operational data) are pulled together, either physically onto a consolidated database or virtually into a consolidated view, and then leveraged for insights. These databases are different from other IT-based systems; for
example, those used in operations support systems or business support systems.
Position and Adoption Speed Justification: The plan to bring together discrete sets of subscriber
data to achieve "one view of subscribers," their entitlements and services has been the cornerstone
of many transformation initiatives among communications service providers (CSPs). Many CSPs are
looking for ways to monetize this one view of subscribers through an extensible carrier-grade
platform. The contextual view of customers offered by SDM in real time is appealing to CSPs that
want to streamline their network architecture, unify silos and create a single point of management
and control. SDM can also be seen as a component of the information architecture that CSPs are
putting together to tackle big data in their enterprises.
The early excitement around SDM has been tempered, especially among mobile CSPs that are
grappling with issues around network evolution and the overall approach to information
architecture. While the concept of SDM is highly promising for CSPs that want to leverage network
intelligence for building new services and commercial models and optimize use of existing
resources, there are few applications and little revenue today that can be directly linked to SDM.
The hype around software and virtualization technologies may have further led some CSPs to
reconsider strategic plans and spending. As a result, the benefit rating of SDM has been lowered
from high to moderate.
Identity management (including for machine-to-machine services), sharing subscriber information
with partners and third-party providers through controlled exposure, and personalization and
segmentation through analytics and quality of service (QoS) of IP access are some applications that
can benefit from SDM. Meanwhile, SDM is being closely aligned with other control plane elements,
such as policy and charging rules functions, to bring out more subscriber and context-aware
capabilities that CSPs can leverage to generate new revenue or improve customer experience.
User Advice: Combine SDM with functions such as CRM, Web analytics, policy management and deep packet inspection (DPI) to generate new revenue streams from real-time marketing insights about subscribers. This can improve customer experience by helping to proactively offer more tailored products and services to certain customer groups, as illustrated in the sketch below.
Plan for deeper integration of networking and IT technologies through SDM, especially in
conjunction with next-generation service delivery platforms.
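As one hedged illustration of such a combination, the sketch below (plain Python; the segment, event fields and offer rule are hypothetical) triggers a tailored offer when a DPI-detected usage event meets a CRM-derived segment condition.

# Illustrative sketch: combining SDM-style subscriber context with a CRM segment
# and a DPI-derived usage event to drive a real-time, tailored offer.
CRM_SEGMENT = {"sub-001": "high-value"}  # hypothetical segmentation feed

def on_dpi_event(subscriber_id, application, cell_congested):
    """Return a tailored upsell when a valuable subscriber streams video on a congested cell."""
    segment = CRM_SEGMENT.get(subscriber_id, "default")
    if application == "video-streaming" and cell_congested and segment == "high-value":
        return {"subscriber_id": subscriber_id, "offer": "1-hour HD speed boost", "channel": "app push"}
    return None

print(on_dpi_event("sub-001", "video-streaming", cell_congested=True))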
Business Impact: The ability to capture and leverage mobile subscriber data offers the potential to
change the role of the mobile CSP. This could mean hardware and software platform upgrades
(which take time and investment), or, for example, CSPs securely exposing and utilizing subscriber
data from the network to profile customers for business purposes. Common use cases include
merchandising, product placement, advertising and promotion, pricing, loyalty programs,
experimental marketing and customer lifetime value. A unified view of subscribers across networks
can also help to resolve problems with inconsistent or duplicate data, cutting down revenue
leakage.


Benefit Rating: Moderate


Market Penetration: 20% to 50% of target audience
Maturity: Early mainstream
Sample Vendors: Alcatel-Lucent; Ericsson; HP; Huawei; Nokia Siemens Networks; Openet; ZTE
Recommended Reading:
"Competitive Landscape: CSP Policy Management Solutions"
"Competitive Landscape: Subscriber Data Management Solutions, Worldwide"
"Emerging Technology Analysis: How CSPs Can Cut Costs and Charm Customers With Integrated
Policy and Charging Control"

Service-Oriented Architecture in OSS/BSS and SDP


Analysis By: Mentor Cana
Definition: Service-oriented architecture (SOA) is an architectural style for IT system development
based on the loose coupling of executable services. With respect to communications service
providers (CSPs), it impacts the telecom operations management systems (TOMS) such as
operations support systems (OSSs), business support systems (BSSs) and service delivery
platforms (SDPs). The term "service," as used in SOA, defines a reusable software element that can
be combined with other software elements to create composite services and solutions.
Position and Adoption Speed Justification: Use of SOA enables CSPs to create an agile and
flexible service platform and IT infrastructure for the creation of new capabilities and services, and
for internal and external IT system integration. An SOA-enabled and compliant IT infrastructure can
enable CSPs to build complex end-user digital solutions and services by integrating services from
different partners in the broader digital ecosystem. It also enables CSPs to expose their services via
APIs for internal and external consumption.
The driving principle of SOA is the modular development of services and decoupling the
consumable layer of a service from its implementation layer. Thus, the objective of SOA is to make
service consumption independent of the technology it was developed with and the platform on
which it was hosted. In the context of SOA, a service can be a business process, complex
functionality, content, workflow, digital service, composite service or end-user service, as long as it
can be consumed through its manifestation layer, usually exposed as a Web service via Web Services Description Language (WSDL) specifications or as RESTful APIs.
The modular development of services, as well as the use of canonical data models, enables each
service to be reused by various composite services. For example, exposing an end user's location
as an open API enables CSPs to easily integrate the capability across services such as mapping,
social networking, and end-user-targeted location-based advertising.
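As a minimal sketch of such an exposed capability (using Flask as an assumed example framework; the endpoint path, port and data are placeholders, and a production CSP API would add authentication, consent checks and rate limiting), a subscriber's coarse location can be served as a reusable RESTful service for composite services to consume.

from flask import Flask, jsonify

app = Flask(__name__)

# Stand-in for a lookup against the network's location capability.
_LAST_KNOWN_CELL = {"sub-001": {"lat": 48.8566, "lon": 2.3522, "accuracy_m": 350}}

@app.route("/v1/subscribers/<subscriber_id>/location", methods=["GET"])
def get_location(subscriber_id):
    """Expose last known coarse location so mapping, social or advertising services can reuse it."""
    location = _LAST_KNOWN_CELL.get(subscriber_id)
    if location is None:
        return jsonify({"error": "unknown subscriber"}), 404
    return jsonify({"subscriber_id": subscriber_id, "location": location})

if __name__ == "__main__":
    app.run(port=8080)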


CSPs have traditionally integrated their back-office solutions using a variety of integration methods
such as point-to-point and enterprise application integration through custom adapters. This
has often resulted in integration complexities and a highly rigid architecture that is incapable of
scaling or fast service creation.
During the past few years, vendors of OSS/BSS and SDP products have enhanced their product
road maps to include a more service-based approach. TOMS vendors are in various stages of
advancement for upgrading their solutions to be SOA-compliant. Most leading vendors now offer
out-of-the-box SOA-compliant interfaces. This has made it possible for CSPs to evolve their
architectures in a modular and phased approach and to attempt a project- or solution-focused
adoption of SOA using best-of-breed components, usually driven by specific business outcomes
instead of one massive IT initiative. SOA has moved from the spotlight as a buzzword and cure-all
silver bullet. CSPs are gaining practical benefits in terms of cost reduction, rapid and agile
application development, faster time to market, and flexibility. As CSPs are becoming digital
businesses, the adoption of SOA is expected to accelerate; SOA principles and related technologies enable CSPs to expose their capabilities in modular ways for participation in the digital ecosystem marketplace.
User Advice:

Implement SOA-driven initiatives and projects using a phased approach, starting with defined
functional areas in your architectures.

Target applications that are expected to undergo extensive and regular change, typically those close to the user (such as order management, customer support and payments), for replacement with SOA-enabled solutions or for wrapping with SOA-compliant interfaces.

Beware of vendors overstating the benefits of SOA in relation to the existing legacy
environment. SOA does not come out of the box. There are two distinct, but complementary,
aspects of SOA: tools and methodology.

Invest in the adoption and application of SOA methodologies and approaches that will show
you how to build services that are usable, reusable, supportable and maintainable over time.

Assess whether your existing project management, software development life cycle (SDLC)
processes and canonical data models are a good match for the implementation of SOA
initiatives.

Ensure that vendors' solutions are compliant with and adhere to SOA principles.

Business Impact: SOA in the OSS/BSS stack has a significant effect in introducing speed,
flexibility, and agility as CSPs transform their operations from product-centric to customer-centric,
and to digital services. SOA-enabled digital service delivery frameworks, on the other hand, will
enable a platform for rapid digital service creation, innovation and faster time to market, while
reducing development costs due to reuse. CSPs should be aware that the ROI from SOA
implementation will be more visible and increase as the number of services deployed and used via
SOA infrastructure increases.


The use of SOA methodologies and tools is a critical component in enabling CSPs to operate at the speed of over-the-top (OTT) players. By opening up their capabilities via open platforms and open APIs for internal
use as well as external consumption based on SOA principles, CSPs can speed up their
participation in the broader digital ecosystem.
Benefit Rating: High
Market Penetration: 5% to 20% of target audience
Maturity: Early mainstream
Sample Vendors: Amdocs; Ericsson; HP; IBM; Microsoft; Oracle; SOA Software; Tibco Software
Recommended Reading:
"The API Development Model Offers CSPs New Opportunities and an Entry Into the Digital
Ecosystem"
"The Importance of Open Platform Strategies for CSPs Joining Emerging Digital Ecosystems"
"CSPs' Digital Business Requires a Bimodal IT Transformation Strategy"
"Marketing Essentials: The Choices CSP CIOs Are Making About SOA and Why Marketing Should
Care"
"How to Balance the Business Benefits and IT Costs of SOA"
"What You Need to Know About SOA Domains for Federated SOA"
"Competitive Landscape: SOA for Communications Service Providers, Worldwide"
"CSP IT Transformation Through 2015"

Location-Based Advertising/Location-Based Marketing


Analysis By: Annette Zimmermann
Definition: Location-based advertising (LBA): Refers to advertisements that appear on a mobile
device, including banner or text ads on a mobile Internet site or mobile application, including maps.
Location-based marketing (LBM): Addresses the user directly. Usually, the consumer receives a
message on a mobile device containing a call to action (to enter a competition, visit a website or
order a product, for example) and an incentive, such as a coupon.
Technologies such as GPS, cell ID, Wi-Fi, Bluetooth or geofencing can be used to trigger both LBA
and LBM.
Position and Adoption Speed Justification: Rollouts have been focused on the U.S. market, but
there is increasing interest in the European market. Here, we also see more traction in the area of
events/conferences and airports as well as some initial interest from banks looking at business-to-consumer solutions. Some technology providers, including Intersec, have made inroads into
emerging markets where they are working directly with communications service providers (CSPs) to
enable LBA and LBM. Nevertheless, mature markets remain the focus for most providers and
brands for now.
Geofencing is a key technique, in this context, that technology providers and retailers leverage to
deliver targeted LBA or LBM to consumers. Moreover, sports events, sightseeing attractions and
theme parks also provide good opportunities for LBM/LBA, and successful examples of this have
been seen in Japan. Public transportation is another good business case, as shown by the Canadian city of Montreal. The city works with SAP's Precision Marketing platform; this entails a consumer
smartphone app for public transport, a Web interface for the 340 retail partners to launch their
campaigns, and analytics (to be provided to the retailers).
Users can opt into the program and select the content that is relevant to them. The high level of
interest in indoor positioning technologies, and in particular in Bluetooth beacons, has recently driven retailers in Europe and the U.S. to push coupons and special offers to mobile phones while the user is shopping inside their stores. Being able to do this during the shopping process (indoors) can make the offer even more relevant. Implementations of Bluetooth beacons are expected to see high uptake during 2014, with installations already recorded (and already happening in some cases) in Europe (the U.K. and Sweden), the U.S. and Singapore. In the U.K., for example, match2blue is currently installing 150 beacons in London's St. Pancras Station.
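Both geofencing and beacon-based campaigns reduce to a proximity trigger against an opted-in user's reported position; the minimal sketch below (plain Python; the store coordinates, radius and offer text are hypothetical) illustrates a geofence check of that kind.

# Illustrative geofence check: trigger an LBM message when a device reports a
# position inside a retailer's geofence. Coordinates and radius are hypothetical.
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

STORE_FENCE = {"lat": 45.5017, "lon": -73.5673, "radius_m": 200}  # hypothetical store location

def check_geofence(device_lat, device_lon):
    """Return an opted-in coupon message when the device is inside the fence."""
    distance = haversine_m(device_lat, device_lon, STORE_FENCE["lat"], STORE_FENCE["lon"])
    if distance <= STORE_FENCE["radius_m"]:
        return "Send opted-in coupon: 10% off today in-store"
    return None

print(check_geofence(45.5020, -73.5670))  # inside the fence, so a coupon is returned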
LBM/LBA has seen some advancement during the past 12 months, so we have moved the profile
forward on the Hype Cycle. Apart from additional implementations described above, vendors such
as Google have enhanced their capabilities in this area, and this is driving the technology forward.
Recent enhancements to Google Now mean that it alerts you when you are near a store where a
product you previously searched for online is available. LBM/LBA will become more intelligent and
form part of the cognizant computing scenario.
It is crucial for brands and retailers of consumer goods to gain access to the data analytics essential
to providing more targeted offers, and to show measurable results. We therefore believe that data
and analytics are key competitive assets in this (currently) very competitive market. In such a
complex and competitive ecosystem, it cannot yet be determined which will be the dominant
vendor; several players have successfully implemented their solutions in a particular geographic
region, and we may therefore see different winners in different markets. In the cognizant computing
scenario, vendors owning a lot of useful user data with which to make their offers relevant are most likely to succeed.
From a demand point of view, Gartner sees growing interest in location-based offerings as long as
they are linked to a direct benefit for the user, as several primary studies have shown.
Current inhibitors to market development include the following:

The retail industry seems to be the low-hanging fruit for technology providers wanting to
implement their solutions with large retail chains. This has been going well in the U.S. market. In
Europe, however, a different go-to-market strategy is needed. Retailers, especially supermarkets, are often local or regional and operate on very low margins, which means that
these organizations are less willing to experiment with new technologies.

For advertisers there are some modal complexities. Even the largest banner ads are considered
by many advertisers to be too constrained to capture much attention. This is driving many
advertisers toward in-app interstitial formats, which can be full-screen but may be less
conducive to LBA contexts. In general, we advise advertisers to look for relevant, nonintrusive
and motivational incentives rather than full-screen ads. One example is Kiip, which offers
rewards based on user achievements in mobile apps such as finishing a level in a game or
learning course, or reaching a goal in a fitness app. This method is probably a way to make
consumers more receptive to the ads.

The concern of users and regulators regarding location privacy may become a provider's concern when privacy settings and data usage are not sufficiently transparent to the user. Even though most mobile users (especially those in Generation Y) are increasingly open to such new services and less suspicious about their data privacy, service providers should not underestimate the backlash that can arise after a prominent data breach.

User Advice:

Lead with fast-ROI services while LBA/LBM is maturing. Coupons, for example, appeal to a
consumer's desire for a bargain, their uptake is easy to measure, and they fit well into the retail industry, which is the most important adopter of LBM.

Technology providers should work with retailers using geofencing techniques and define
business models around this.

For small LBA/LBM providers:

Get to market quickly. Focus on the user experience and reporting capabilities to attract
advertisers and potential acquisition offers from larger providers looking to enter the market.

Business Impact: We see the strongest impact of LBM and LBA in the retail industry, with most
initiatives in the U.S. and Western Europe.
Benefit Rating: High
Market Penetration: 5% to 20% of target audience
Maturity: Adolescent
Sample Vendors: Apple; Facebook; Foursquare; Google; Intersec; match2blue; Point Inside;
Qualcomm; SAP; Shelfbucks
Recommended Reading:
"Market Trends: Digital Map Data, Worldwide, 2014"
"Market Trends: iBeacon Will Generate New Revenue Streams for Mobile App Developers in 2014"


Climbing the Slope


Content Integration
Analysis By: Gavin Tay
Definition: Content integration refers to the consolidation of enterprise content, typically dispersed
throughout enterprises in a myriad of repositories, into a single view. Integration tools may sit above
these repositories as data integration middleware, or above workflow and business process
management systems, to provide a unified interface with federated content.
Position and Adoption Speed Justification: Content integration became obsolete before reaching
the Plateau of Productivity. This was largely because the long-term prospects for custom
connectors were limited, partly due to the difficulty of maintaining them and partly due to the
emergence of Web services and representational state transfer application programming interfaces
(APIs). Other integration options, such as the Java Specification Request (JSR) 170/283 standard,
also did not take off.
Many of the content integration connectors used by enterprise content management (ECM) suites were folded into the vendors' own products, to integrate among solutions that the vendors themselves had acquired, and were no longer offered for commercial use. Examples include IBM's Content Integrator, which federates
content within the IBM portfolio, and EntropySoft, which had original equipment manufacturer
agreements with IBM and EMC (Documentum) but was acquired by salesforce.com.
Content integration continues to be hyped, however, because the vast majority of enterprises
continue to have multiple content repositories, increasingly with a combination of on-premises ECM suites and cloud content repositories such as enterprise file synchronization and sharing
(EFSS) solutions. EFSS vendors have brought about a resurgence and continued use of Content
Management Interoperability Services (CMIS) or proprietary connectors, as a major advantage of
amalgamating multiple repositories for the purpose of mobile access to enterprise content.
There is a potential impact from CMIS, the most important industry-sponsored standard, which has gained the support not only of IBM and Alfresco but also of most of the other major ECM vendors and EFSS providers. Many enterprises are also considering using user experience platforms
(UXPs), portals and federated or contextual search as options for the virtual consolidation of
frequently used content at different levels of abstraction.
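The virtual-consolidation pattern can be illustrated with a minimal sketch (plain Python; the connector classes, documents and fields are stand-ins, and a real deployment would use CMIS bindings or vendor connectors instead) that fans one query out to several repositories behind a common interface.

# Illustrative federation layer: one query, several repositories, one merged result list.
class RepositoryConnector:
    def __init__(self, name, documents):
        self.name = name
        self._documents = documents  # list of dicts with "title" and "path"

    def search(self, term):
        return [doc for doc in self._documents if term.lower() in doc["title"].lower()]

def federated_search(term, connectors):
    """Return one merged result list, tagging each hit with its source repository."""
    results = []
    for connector in connectors:
        for doc in connector.search(term):
            results.append({"repository": connector.name, **doc})
    return results

onprem_ecm = RepositoryConnector("on-prem ECM", [{"title": "Supplier contract 2014", "path": "/contracts/supplier.docx"}])
cloud_efss = RepositoryConnector("cloud EFSS", [{"title": "Contract review notes", "path": "/shared/notes.txt"}])

print(federated_search("contract", [onprem_ecm, cloud_efss]))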
User Advice: Enterprises should pick content management vendors that have standardized and
easily accessible repositories. Longer term, the focus should be on CMIS version 1.1, which was
approved as a standard in December 2012 by the Organization for the Advancement of Structured
Information Standards (OASIS). As with all standards in their infancy, it will take a while before all
vendors become compliant. The preliminary aim with CMIS is to provide information sharing across
CMIS-enabled repositories, but the value may ultimately increase by allowing those repositories to
coexist even as they feed search engines, portals and UXP applications with more information at
lower cost and with less complexity.


One immediate benefit may be a single view into content repositories via a CMIS-enabled "content
client" that is richer than what has typically been delivered by ECM vendors. Mobile-enabled CMIS
applications or browsers have not gained as much traction even as organizations look to bring their
content and connectivity out into the field.
Enterprises must look beyond both JSR 170/283 and Web Distributed Authoring and Versioning
(WebDAV), which either did not bear fruit or are very old. Integration architectures from vendors
such as IBM, Oracle (Context Media) and Adobe, and third-party offerings such as those of T-Systems' Vamosa, have also become obsolete or defunct. Most system integration partners would
also have toolkits to connect the products they support with multiple repositories and business
applications, such as ECMG, which has built connectors.
Business Impact: Content integration technology was slated to have improved interoperability
between a company's content, its content-centric processes and related data. Despite this promise,
the ECM market underwent consolidation itself, so these tools became increasingly unnecessary.
Many of the content integration solutions were subsequently acquired by these large ECM vendors,
to fulfill interoperability among their own solution offerings and those of the newly acquired solutions
but are not being resold for use on their own.
Connecting content to structured data and to end users in a more engaging manner has taken over,
but has many implications for commercial applications. Content analytics is becoming an alternative
to hardwired integration approaches. As a result, this will support both governance and cost-reduction initiatives by optimizing information assets for availability.
Benefit Rating: Moderate
Market Penetration: 5% to 20% of target audience
Maturity: Mature mainstream
Sample Vendors: Adobe; Alfresco; EMC; HP; IBM; Nuxeo; OpenText; Oracle
Recommended Reading:
"New Information Use Cases Combine Analytics, Content Management and a Modern Approach to
Information Infrastructure"
"The Emerging User Experience Platform"

Infrastructure as a Service (IaaS)


Analysis By: Lydia Leong
Definition: IaaS is a standardized, highly automated offering where compute resources,
complemented by storage and networking capabilities, are owned by a service provider and offered
to the customer on demand. The resources are scalable and elastic in near real time, and are
metered by use. Self-service interfaces are exposed directly to the customer, including an API and a GUI. The resources may be single-tenant or multitenant, and are hosted either by the service
provider, or on-premises in the customer's data center.
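To ground the self-service API aspect, here is a minimal sketch using the AWS SDK for Python (boto3) as one example; the image ID is a placeholder, credentials are assumed to be configured, and other IaaS providers expose comparable APIs.

# Illustrative sketch: renting and releasing compute capacity on demand through an API.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Request a single small virtual machine; capacity is metered by use.
response = ec2.run_instances(
    ImageId="ami-00000000",   # placeholder image ID
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print("Provisioned instance:", instance_id)

# The same API releases the capacity when it is no longer needed.
ec2.terminate_instances(InstanceIds=[instance_id])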
Position and Adoption Speed Justification: In practical terms, IaaS is on-demand computing
capacity rented from a service provider. Rather than buying servers and running them within their
own data centers, businesses simply rent the necessary infrastructure from a service provider in a
shared, scalable, "elastic" way, and access it via the Internet or a private network. In some
organizations, IaaS may eventually replace the traditional data center.
Cloud-based compute infrastructure services are now used to address a broad range of use cases.
While careful attention still needs to be paid to selecting an appropriate provider, architecture and security controls, and customers must address governance, risks and regulatory compliance, IaaS is a mainstream technology that can be used to host most workloads, including mission-critical enterprise applications. The best use of IaaS is transformational, where it can offer significant benefits in business agility, operations quality and cost. However, customers can also
successfully use IaaS simply as a form of outsourced virtualization.
IaaS is most often used to improve developer agility, spanning the entire application life cycle from
development to production. Many customers use these services as test and development
infrastructure for pilot projects, rapid application development environments and formal lab
environments. As test and development-specific features and management tools improve, formal
development environments will become more common. IaaS can also be used to improve the agility
of other technical users, such as scientists and engineers. Batch-oriented, compute-intensive
workloads (such as modeling, simulation, scientific computing and one-time processing needs like
transcoding) are highly cost-effective in the cloud. Big data use cases are increasingly common.
It is also common to use IaaS to host websites and Web-based applications, especially those that
serve a consumer audience via the Internet. IaaS is also frequently used to serve internal
applications to users within the enterprise, including hosting applications like Microsoft SharePoint.
IaaS may also be used as the back end to a mobile application, such as an iPhone app. These uses
of IaaS are convergent with the general Web hosting market, and features and capabilities formerly available only on dedicated hardware are now being extended to shared cloud resources.
These services are maturing and being adopted most quickly in the U.S. Although global demand is
robust, including in emerging markets, the growth of the market is slower outside the U.S. due to a
number of factors including:

Less competition

Less mature offerings

Fragmentation resulting from regulatory and data-sovereignty requirements

The need to keep data and processing in-country

User Advice: Cloud provider capabilities vary significantly, but many offer strong SLAs backed by
financial penalties, high levels of security, and solid service and support. Businesses can safely
adopt these services. The risks are not significantly greater than other outsourced hosting approaches, assuming the cloud services used match the service-level and security needs of the
applications.
Most enterprises have begun to adopt IaaS in a strategic way and have a broad range of
workloads on IaaS, including production applications, but IaaS still represents only a small
percentage of their overall workloads. Midmarket businesses have been slower to adopt IaaS, and
many are still in the initial stages of adoption, but are more likely to believe that IaaS will replace
nearly all of their own data center infrastructure over the next five years. Businesses that have not
yet trialed IaaS should consider pilot projects for test and development, compute capacity
augmentation, Web content and applications. Successful pilots can be expanded into broader
production use.
Both public multitenant and private single-tenant offerings are available, but the distinction between
public and private cloud IaaS is blurring. The most cost-effective clouds are highly standardized and
use a shared capacity pool. Hybrid public/private cloud offerings, enabling "cloud bursting" for on-demand capacity and business continuity, currently exist, but the technology will not be mature until at least 2016.
This market is evolving extremely quickly, so the suitability of these services should be re-evaluated
at least once every six months.
Business Impact: Cloud compute infrastructure services will be broadly advantageous to IT
organizations. The cost benefits, driven primarily by automation, will be particularly significant for
small and midsize businesses. Larger enterprises will benefit primarily from greater flexibility and
agility, rather than direct cost reduction.
In the short term, the benefits will be driven primarily by rapid provisioning that requires minimal
manual intervention. Over the longer term, more system management tasks will be automated,
leading to more efficient infrastructure management. Organizations that simply "lift and shift"
workloads to the cloud will reap limited cost and efficiency benefits compared to those who use
IaaS as a way to drive IT transformation.
The metered-by-use attribute of these services will result in more efficient use of capacity and their
self-service nature will empower employees outside of IT operations, improving developer
productivity and making it easier for business buyers to obtain infrastructure.
Benefit Rating: High
Market Penetration: 5% to 20% of target audience
Maturity: Adolescent
Sample Vendors: Amazon Web Services; CenturyLink; CSC; Microsoft (Azure); Rackspace; Verizon
Terremark

Mobile CDN
Analysis By: Akshay K. Sharma

Definition: A mobile content delivery network (CDN) is used to improve performance, scalability
and cost efficiency, for the delivery of content over mobile networks. It is like a traditional fixed
CDN, but has added intelligence for device detection and content adaptation, as well as
technologies to solve the issues inherent in mobile networks that have high latency, higher packet
loss and huge variations in download capacity.
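A minimal sketch of two of the behaviors described above, device detection and content adaptation combined with edge caching, follows (plain Python; the user-agent matching, variant names and stub origin fetch are simplified placeholders rather than any vendor's CDN logic).

# Illustrative mobile CDN edge behavior: detect the device class, pick an adapted
# variant of the asset, and cache it at the edge so repeat requests skip the origin.
CACHE = {}  # (asset, variant) -> bytes, standing in for an edge cache

def detect_variant(user_agent):
    ua = user_agent.lower()
    if "iphone" in ua or "android" in ua:
        return "480p"      # smaller rendition for phones on variable links
    if "ipad" in ua or "tablet" in ua:
        return "720p"
    return "1080p"

def serve(asset_id, user_agent, fetch_from_origin):
    variant = detect_variant(user_agent)
    key = (asset_id, variant)
    if key not in CACHE:                       # cache miss: go back to the origin once
        CACHE[key] = fetch_from_origin(asset_id, variant)
    return CACHE[key]

# Example use with a stub origin fetch.
payload = serve("promo.mp4", "Mozilla/5.0 (iPhone; ...)", lambda a, v: f"{a}@{v}".encode())
print(payload)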
Position and Adoption Speed Justification: Using a mobile CDN is less expensive than buying
servers and bandwidth, which is crucial for mobile communications service providers (CSPs).
Mobile CSPs are offering their own multiscreen video solutions as a type of over-the-top service
from fixed devices to tablets and smartphones, across disparate access networks (such as Wi-Fi,
Long Term Evolution [LTE] and High-Speed Packet Access Evolution [HSPA+]).
Newer optimization techniques, including video caching, transcoding and multicasting, are being
used in mobile core networks, as components of CDN offerings. LTE Broadcast (Enhanced
Multimedia Broadcast Multicast Services [eMBMS]) services may emerge as key drivers for this
technology. KT has a live network, and trials and demonstrations have been conducted by mobile
CSPs including AT&T, China Mobile, KPN, Orange, Portugal Telecom, Telstra and Verizon.
Traffic shaping and peering relationships between CDN providers and content owners may become
more important. This will be especially likely with the advent of real-time charging and policy-based
control for services such as HD video on demand and of "CDN on a blade" within mobile packet
core solutions (announced by Akamai together with Ericsson, but yet to be deployed in any
meaningful way).
Concerns have been raised about whether mobile CDN solutions can support digital rights
management-enabled content or address multiple similar but proprietary protocols for peer-to-peer
content. There is also an architectural debate about whether these solutions should reside in a
CSP's core network or closer to the subscriber-facing edge, with cloud radio access network
architectures, deep packet inspection traffic-shaping solutions and routers with CDN functions
supporting localized caching or optimization of content.
There is also a debate about business models, namely whether mobile CDNs should be managed
and owned by CSPs or outsourced to managed-service providers like Akamai. Differences between
CSPs may be seen, for example, in AT&T's outsourcing to Akamai, as compared with Verizon's
building its own mobile CDN with help from suppliers like Alcatel-Lucent. Additionally, Cisco's
Intercloud initiatives may be used to broker and peer between CDN providers.
User Advice: Mobile CSPs should carefully assess opportunities to forge partnerships with mobile
CDN providers or to build a mobile CDN in-house.
Building in-house requires you to overcome technical challenges and to meet business
development requirements to broker and peer with content owners and other CSPs.
Outsourcing to CDN providers is an option, but it increases transaction costs.
A newer, hybrid option that should be evaluated involves both in-house development and brokering
with external CDN providers.


Business Impact: Mobile CDNs will continue to grow in terms of the breadth and scope of services
they support. This will expand the range of application-fluent network services and facilitate
relationships between e-commerce partners, including advertisers.
A mobile CDN offloads origin servers via edge caching and load balancing. It also offers improved
latency due to closer proximity to users, as well as intelligent optimization techniques.
A mobile CDN can be used to distribute rich media such as audio and video as downloads or
streams, including live streams. It can also be used to deliver software packages and updates as
part of an electronic software delivery solution.
CDNs are common in fixed networks, and their use in mobile core networks is maturing as well.
Benefit Rating: High
Market Penetration: 1% to 5% of target audience
Maturity: Early mainstream
Sample Vendors: Akamai; Alcatel-Lucent; AT&T; Cisco; Ericsson; F5
Recommended Reading:
"Adopt Embedded Mobile Network Operators' CDNs for Enterprise Apps to Improve the User
Experience"

Rich Communication Suite


Analysis By: Deborah Kish
Definition: Rich Communication Suite (RCS/RCS-e) is a Global System for Mobile Communications
Association (GSMA) initiative aiming to develop specifications for Rich Communication Services.
These include "enhanced" instant messaging (IM), video calling and the ability to share documents
and photos simultaneously during calls and service discovery. All RCS services will become
available across any network and any device. The current specifications release from the GSMA is
RCS 5.1 v4, published in November 2013.
Position and Adoption Speed Justification: Since our last update, RCS has advanced in terms of
launches in the market. For example, Claro in Latin America launched RCS services under the joyn
brand in five markets in 2013, Sprint in the U.S. launched services via Jibe in October, and O2 in
Germany plans its launch under the joyn brand in 2014. Due to such launches and the advancement
of the specifications, the position of RCS has advanced on the Hype Cycle and is now at the
midpoint of the Slope of Enlightenment. A few communications service providers (CSPs) have also
launched similar services on their own in addition to joyn for example, Telefonica's TU Me and
Orange's Libon.
Additionally, we anticipate that the trends toward software-defined networking (SDN) and network
function virtualization (NFV) will advance RCS adoption, as CSPs move to provisioning advanced services and functions via software. Vendors at Mobile World Congress 2014 showcased RCS in
the cloud, which is likely to be the next adoption strategy for CSPs. We anticipate that CSPs with an IP Multimedia Subsystem (IMS) as the foundational architecture, and those moving toward VoLTE, will be the main adopters.
User Advice: Take one of two approaches when deciding on RCS, as it can be provisioned in two
ways:
Option 1: Via SIP proxies, which is a more cost-effective approach for RCS-e. SIP proxies can
redirect requests through cloud-based platforms where RCS can be provisioned. With SIP,
implementation of devices is simplified and costs are lowered. However, SIP proxies need to be
treated with caution. It is solely Internet Protocol-based and not inherently secure, and lacks quality
of service. Also, it comes in many flavors that are needed for interoperation at a network level.
Therefore, it is wise to ensure configurations are optimal. Investments in session border controllers
may be necessary to avoid these issues.
Option 2: Via IMS implementation, which is optimal for RCS 5.1. RCS uses IMS to handle
underlying network features such as authenticating and charging for services. Carriers using IMS-based RCS can offer the following: stronger security; policy-based, end-to-end bandwidth
management via the combined customer reach of RCS carriers; and carrier-grade service resiliency
as an extension of IMS.
Ask about cloud-based RCS services as an alternative in order to potentially gain new revenue
streams and offer subscribers more advanced services.
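As a toy illustration of the SIP-based approach in Option 1, the sketch below builds a SIP OPTIONS request of the kind RCS clients route through proxies for capability discovery; the headers are heavily simplified and all addresses and hosts are placeholders, not a compliant RCS stack.

# Illustrative only: constructing a simplified SIP OPTIONS request for capability discovery.
import uuid

def build_options_request(from_uri, to_uri, local_host):
    call_id = uuid.uuid4().hex
    return (
        f"OPTIONS {to_uri} SIP/2.0\r\n"
        f"Via: SIP/2.0/UDP {local_host};branch=z9hG4bK{call_id[:8]}\r\n"
        "Max-Forwards: 70\r\n"
        f"From: <{from_uri}>;tag={call_id[:6]}\r\n"
        f"To: <{to_uri}>\r\n"
        f"Call-ID: {call_id}\r\n"
        "CSeq: 1 OPTIONS\r\n"
        f"Contact: <{from_uri}>\r\n"
        "Accept: application/sdp\r\n"
        "Content-Length: 0\r\n\r\n"
    )

print(build_options_request("sip:alice@csp.example", "sip:bob@csp.example", "ue.csp.example"))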
Business Impact: It is hard to forecast the success of RCS because there are still unknown factors
concerning which will be the winning product (joyn or CSPs' own solutions) and user experience,
such as whether consumers will find intelligent address book features attractive on devices with
smaller screens, or whether the mass market will be interested in the more complex RCS
functionality. RCS also lacks a clear business case in relation to capitalization, besides protecting
existing infrastructure deployments and protecting the subscriber base. Proof points have yet to be
determined.
Benefit Rating: Moderate
Market Penetration: 5% to 20% of target audience
Maturity: Early mainstream
Sample Vendors: Acision; Alcatel-Lucent; Critical Path; Ericsson; Huawei; Mavenir
Recommended Reading:
"Emerging Technology Analysis: What Exactly Do Rich Communications Offer Communications
Service Providers?"
"Market Trends: How to Succeed With Mobile Apps on 4G Networks"


"Magic Quadrant for Session Border Controllers"


"Market Trends: CSP Strategic Initiatives That Address OTT Market Challenges"
"Market Insight: Connected Digital Service Revenue Opportunities for Vendors Worldwide"

Mobile Advertising
Analysis By: Mike McGuire
Definition: Mobile advertising is advertising and other paid placements on mobile device screens,
most notably smartphones and tablets. Mobile advertising formats include search-, Web-, app-,
stream- and message-based placements.
Position and Adoption Speed Justification: Mobile advertising continues to gain momentum.
Gartner's mobile advertising forecast projects that the global mobile advertising market will top $42
billion in 2017, up from $13 billion in 2013. This increase is being driven by consumers' use of
smartphones and tablets to access information and entertainment content. Mobile app use is led by
the gaming, social and utility categories.
In the U.S. and other mature markets across Europe and Asia/Pacific, mobile advertising stands to
benefit from location-based ads, camera-activated search and bar code scanning, which are still
relatively nascent. Mobile payments, including ad-based transactions, and point-of-sale
redemptions, will make mobile advertising a more attractive channel.
In emerging markets, such as the fast-growing economies of Brazil, Russia, India and China, mobile
Internet access is leapfrogging the desktop and creating new advertising opportunities as consumer
goods and service companies look to grow these markets. Still, significant challenges exist:

Depressed prices: Consumers are flocking to the mobile Internet in greater numbers and
faster than advertisers, resulting in a surplus of mobile advertising inventory, which drives down
ad prices. Plus, app developers are propping up mobile advertising revenue by buying ads to
promote their apps, which has had the effect of reinforcing perceptions of mobile having a lot of
less-than-optimum inventory.

Formats and standards: Existing ad standards from organizations such as the Mobile
Marketing Association and the Interactive Advertising Bureau are widely considered to trail the
capabilities of more-advanced smartphones and media tablets, leading to the fragmentation of
device-specific platforms, which is driving up production costs for rich interactive ads.

Metrics and measurement: The mobile metrics picture, considered to be a baseline requirement for major media investments, has to overcome technical and organizational complexities. As traditional metrics providers are joined by mobile-specific mobile advertising and mobile marketing metrics providers focused on scaling a mix of first- and third-party data to target consumers, Gartner expects market forces to drive these providers to dramatically reduce both the technical and the organizational complexities.

Privacy and targeting: Although mobile devices contain a wealth of targeting data, advertisers and app developers have to exert care in leveraging those capabilities and avoid exacerbating consumer concerns about privacy. Technology platform vendors are taking care
as well, with Apple's iOS having its Safari Web browser default settings set to reject third-party
cookies (an essential component in the currently widespread practice of Web targeting).
Despite these unresolved issues, we expect overall strong growth rates for mobile advertising over
the coming years as the format ascends the plateau to become a fully productive marketing tool.
User Advice: Marketers considering mobile advertising must evaluate a number of variables to
determine how best to reach their target audience:

Brands and agencies must develop methods of evaluating the effectiveness of mobile
campaigns across various mobile channels to optimize the use of mobile media in the
marketing mix.

Local advertisers, in particular, must understand how to leverage the medium's ability to deliver
nearby traffic to their offline stores and venues in a privacy-friendly way.

Advertisers and agencies must continue to refine privacy policies and practices to address new
and potentially controversial targeting capabilities of mobile devices and systematically assess
regional variations and partner practices. This is particularly an issue in the EU, where the
ePrivacy Directive is set to disrupt many common tracking practices of websites.

Content providers, developers and publishers need to understand how to incorporate elements
such as social features, maps and video into applications that will attract users and advertisers.

Communications service providers and manufacturers need to be decisive about their intended
roles in mobile advertising and acknowledge that, with few exceptions, success will require
strong partnerships and strategic acquisitions to quickly establish key roles in end-to-end
solutions that can deliver efficiency and scale to advertisers.

Business Impact: Mobile advertising will continue to siphon most of its revenue from media such
as print and outdoor categories, although improvements in efficiency and effectiveness will prevent
overall spending from being a zero-sum game. Mobile's short-term impact on television will be
minimal, although the overall effect of mobile will be to emphasize direct, targeted, pull-style
interactions that may accompany a long-term reduction in the share of marketing resources
directed at general media advertising.
Google and Facebook are the biggest beneficiaries of mobile advertising. Apple's much-hyped iAd
platform is moving in fits and starts. Publishers, content providers and application developers
appear to have a similar problem on mobile that has challenged their efforts online; namely, the
"long tail" fragmentation of audiences and usage that makes it difficult for all but a few providers to
achieve the scale necessary to attract substantial ad revenue.
Benefit Rating: High
Market Penetration: 20% to 50% of target audience
Maturity: Early mainstream


Sample Vendors: Apple; Conversant; Google; Microsoft; Millennial Media; Velti; Yahoo
Recommended Reading:
"Forecast: Mobile Advertising, Worldwide, 2009-2016"
"Competitive Landscape: Location-Based Advertising and Marketing"

Enterprise Architecture
Analysis By: Brian Burke; Philip Allega
Definition: Enterprise architecture (EA) is a discipline for proactively and holistically leading
enterprise responses to disruptive forces by identifying and analyzing the execution of change
toward desired business vision and outcomes. EA delivers value by presenting business and IT
leaders with signature-ready recommendations for adjusting policies and projects to achieve target
business outcomes that capitalize on relevant business disruptions. EA is used to steer decision
making toward the evolution of the future-state architecture.
Position and Adoption Speed Justification: Leading EA programs continue to push the overall EA
discipline further toward the Plateau of Productivity in the next two to five years. The overall
maturity of EA by practitioners is improving, continuing to progress on the Slope of Enlightenment
through 2014. Leading EA teams balance focus on transformational change while maintaining
existing services.
Leading EA practitioners are business-outcome-driven, and evolve their EA programs with a focus
on EA value realization rather than on the creation of artifacts for their own sake. EA programs have
clearly shifted their positioning away from an inward view of IT systems alone to a broader vision of EA that guides their organizations, and beyond, toward realizing their enterprise strategies and goals.
Continued movement up the Slope of Enlightenment in 2014 is driven by EA practitioners who are
leading the evolution of EA in five key ways:

Focusing on business transformation

Integrating EA with business

Defining business outcome performance metrics

Working closely with business executives

Investing in EA (95% of leading programs invest 10% or more of their IT budgets on EA)

While EA is an established discipline in the majority of organizations, there continue to be large numbers of organizations that are either starting or restarting EA programs. Overall, EA maturity on
Gartner's ITScore is 2.55 out of 5.00, leaving significant room for improvement. We believe a large
number of EA practitioners are shifting focus to a more pragmatic business-outcome-driven EA
approach, but, as a general practice, EA remains on the Slope of Enlightenment. As a larger number
of EA practitioners become focused on delivering business outcomes, the EA practice will evolve to
reach the Plateau of Productivity within the next two to five years. While it may seem to be a long
transition period, change comes slowly to the practice of EA, demonstrated by the fact that it has
taken more than 25 years for EA programs to progress to the level of impact we are seeing today.
User Advice: Enterprise architects are making the switch from process-driven EA approaches to a
business-outcome-driven EA. As noted in "Stage Planning a Business Outcome-Driven Enterprise
Architecture," EA is a journey, not a destination. Each stage of the journey must be planned
pragmatically, and be focused on a limited set of target business outcomes. Each iteration must
deliver a highly valuable set of recommendations for business managers to execute.
Thinking in terms of stage planning guides, EA practitioners must think strategically to:

Align EA to the highest-priority business outcomes.

Streamline EA development to only create the deliverables that directly address the highest-priority business outcomes.

Define a process to create those deliverables in a resource-efficient way.

Going forward, enterprise architects will be challenged to address an assortment of disruptive technologies that are emerging today, such as smart machines, the Internet of Things (IoT), 3D printing, big data and gamification (see "Digital Business: 10 Ways Technology Will Disrupt Existing
printing, big data and gamification (see "Digital Business: 10 Ways Technology Will Disrupt Existing
Business Practices"). Enterprise architects will need a renewed focus on disruptive technologies,
and leaders will develop the role of vanguard EA (see "Predicts 2014: Enterprise Architect Role
Headed for Dramatic Change").
Business Impact: Enterprise architects who focus on the most significant business disruptions and
outcomes that the organization faces, as well as on the deliverables that will guide the organization
through the required change, are able to demonstrate the highest business impact and value.
High-value EA organizations' scope of EA change includes business, information, solutions and
technology, and:

Addresses opportunities for strategic and tactical change to enable the competitive positioning
of the business in the future state.

Identifies deficiencies in the current-state portfolio that must be resolved.

Provides a set of constraints on projects to minimize complexity.

To support these activities and to address stakeholder issues, leading enterprise architects:

Engage senior business and IT leadership to understand the goals of the organization.

Clearly communicate business strategies, and articulate dependencies and requirements, to business leaders.

Measure and deliver the value of EA, based on enabling business outcomes rather than on the
internal tasks of developing EA artifacts.

Benefit Rating: High


Market Penetration: More than 50% of target audience


Maturity: Early mainstream
Recommended Reading:
"Stage Planning a Business Outcome-Driven Enterprise Architecture"
"Define the Business Outcome Statement to Guide Enterprise Architecture Efforts"
"Predicts 2014: Enterprise Architect Role Headed for Dramatic Change"

IP Service Assurance
Analysis By: Martina Kurth
Definition: Internet Protocol (IP) service assurance encompasses the end-to-end software and
processes that provide insight into the quality of IP services. The low-latency constructs associated
with many IP services have caused concern over their quality, which has resulted in a need for
specialized solutions. Service, performance and fault management of end-customer IP services is of
strategic importance for communications service provider (CSP) operations and requires a
paradigm shift.
Position and Adoption Speed Justification: The position of IP service assurance on the Hype
Cycle reflects the hype caused by CSPs' concerns about customer-perceived quality of service.
This is also related to the increasing complexity of service bundles due to the acceleration of Long
Term Evolution and all-IP technology. As competition for next-generation IP services intensifies and
connectivity becomes more of a commodity, CSPs will have to turn their attention to improving the
customer experience for these services in order to differentiate themselves.
We see a paradigm shift from resource-centric to customer-centric service assurance with
corresponding analytics, policy, charging and third-party exposure. In the long term, software-defined networking (SDN) and network function virtualization (NFV) technology will allow for more
dynamic application orchestration and real-time, customer-oriented IP service assurance.
Service assurance systems will remain the core operational catalysts for CSPs' activities to reduce
churn, manage SLAs and ensure an adequate level of customer-perceived quality of experience.
For CSPs to remain competitive, superior availability and quality of content delivery in increasingly
complex multiservice IP environments is crucial. IP service assurance network monitoring as part of
the control plane is also becoming a focus area for improving the resilience of IP connections.
User Advice: Ensure the concurrent utilization of IP service management data across different
network, operational and marketing departmental groups. IP service assurance should evolve as a
customer experience tool to be used collaboratively across the organization to measure and act on
IP service quality insights.
Select service assurance solutions that support multiple technologies, services and internal user
groups. Priority should be given to systems that provide a granular view of the customer in terms of service usage, profile and problems in the network related to customer service. Real-time
capabilities are becoming vital to manage SLAs and customer expectations proactively.
Competitive differentiation can be achieved through strong aggregation and correlation capabilities
for relevant key performance indicators to enable holistic judgments to be made about the triggers
of IP service-level grades.
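A minimal sketch of such aggregation follows (plain Python; the KPIs, targets, weights and sample measurements are hypothetical), combining per-service measurements into a single weighted grade that different departments can act on.

# Illustrative KPI aggregation: correlate several IP service KPIs into one grade.
KPI_TARGETS = {
    "latency_ms": (150.0, 0.4),       # (target, weight)
    "packet_loss_pct": (1.0, 0.4),
    "jitter_ms": (30.0, 0.2),
}

def kpi_compliance(value, target):
    """Full credit at or below target, decreasing linearly to zero at twice the target."""
    if value <= target:
        return 1.0
    return max(0.0, 1.0 - (value - target) / target)

def service_grade(measurements):
    """Weighted 0-100 grade across the KPIs that drive the IP service level."""
    return round(sum(weight * kpi_compliance(measurements.get(kpi, target), target)
                     for kpi, (target, weight) in KPI_TARGETS.items()) * 100, 1)

sample = {"latency_ms": 120.0, "packet_loss_pct": 1.8, "jitter_ms": 25.0}
print(service_grade(sample))  # 68.0 with these hypothetical values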
It is also essential that service assurance software architecture supports the modeling and creation
of IP service quality to anticipate business-level activities for IP services such as video on demand,
IPTV and IP VPNs.
Move toward a holistic service assurance platform that integrates network, service, fault and
customer management functionality. Moreover, CSPs who use multitechnology and multidomain
service assurance will be able to overcome the challenge of managing GSM, 3G and 4G network
and service resources simultaneously. Also, service assurance sourcing decisions should take into
account the increasing convergence of service assurance with network resource, asset
management and service delivery platforms to ensure operational process agility.
Endeavor to work with a vendor that enables a congruent network and service topology view across
network and IT, as well as IP and transport.
Evaluate SDN/NFV service orchestration capabilities related to IP functions to ensure the end-to-end configurability and tunability of networks.
Business Impact: CSPs will invest heavily in IP service assurance solutions as they migrate to all-IP
networks. Investments in solutions that assure the customer-perceived quality of IP services are
ranked at the top of CSPs' investment priority lists worldwide. These solutions endeavor to manage
the customer-perceived quality of IP services and, as such, help to achieve competitive
differentiation and reduce churn.
Benefit Rating: High
Market Penetration: 20% to 50% of target audience
Maturity: Early mainstream
Sample Vendors: Clarity International; Comarch; HP; IBM; InfoVista; Monolith Software; Mycom;
Nokia; Tektronix; Teoco
Recommended Reading:
"Magic Quadrant for Operations Support Systems"
"SDN and NFV Offer CSPs a New Way to Do Business"


Entering the Plateau


Mobile DPI
Analysis By: Akshay K. Sharma
Definition: Mobile deep packet inspection (DPI) is a technique used to monitor mobile data traffic.
As data services become more important than voice for revenue generation, and networks are upgraded to Long Term Evolution (LTE) and become Internet Protocol (IP) end to end, the ability to perform traffic shaping, and perhaps blocking, becomes important. Mobile DPI can be a stand-alone network element or part of existing network elements.
Position and Adoption Speed Justification: Mobile DPI has received a lot of negative publicity
in net neutrality debates as a means of determining which traffic should be shaped. However,
mobile DPI has now become a mainstream technique for determining how over-the-top traffic is
managed. It can be a proactive method for communications service providers (CSPs) to achieve
session awareness, prioritize sessions such as emergency calls, and support regulatory compliance
for lawful interception, alongside newer video optimization solutions.
Mobile DPI was in the trough a few years ago, when net neutrality advocates considered it a
negative tool, but it is now regarded as a necessary technique for CSPs to become service-aware for
tiered services and policy enforcement. Now that the row over net neutrality has subsided
somewhat in the U.S., traffic shaping is considered a useful technique.
Mobile DPI can also be a way to provide regulatory features like lawful intercept, but privacy needs
to be maintained for mainstream users. For lawful intercept, peer-to-peer traffic is usually difficult to
capture, and mobile DPI solutions from vendors like ipoque have the ability to identify, filter and
stream specific communication flows based on either an application, or a predefined set of criteria
(source/destination address, packet type or pattern type, for example).
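As a hedged, purely illustrative sketch of this criteria-based flow selection, the Python fragment below matches decoded flow records against one predefined intercept rule combining a source subnet, a destination port and a payload pattern. The record fields, rule layout and example values are assumptions made for illustration and do not represent ipoque's or any other vendor's actual interface.

```python
# Illustrative sketch: select flows that match predefined intercept criteria.
# Flow record fields, rule layout and values are assumed for illustration only.
import ipaddress
import re

# Hypothetical intercept rule: source subnet, destination port, payload pattern.
intercept_rule = {
    "src_subnet": ipaddress.ip_network("10.20.0.0/16"),
    "dst_port": 5060,                                  # e.g. SIP signaling
    "payload_pattern": re.compile(rb"INVITE sip:"),
}

def matches_rule(flow, rule):
    """Return True if a decoded flow record satisfies every criterion in the rule."""
    if ipaddress.ip_address(flow["src_ip"]) not in rule["src_subnet"]:
        return False
    if flow["dst_port"] != rule["dst_port"]:
        return False
    return bool(rule["payload_pattern"].search(flow["payload"]))

# Hypothetical decoded flow records produced by an upstream DPI engine.
flows = [
    {"src_ip": "10.20.3.7", "dst_port": 5060, "payload": b"INVITE sip:alice@example.com"},
    {"src_ip": "192.0.2.9", "dst_port": 443, "payload": b"\x16\x03\x01"},
]

matched = [f for f in flows if matches_rule(f, intercept_rule)]
print(len(matched), "flow(s) selected for streaming to the mediation function")  # 1
```

In practice a DPI engine performs this matching at line rate in hardware or optimized software; the sketch only illustrates the selection logic, not the performance characteristics.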
User Advice: CSPs are deploying mobile DPI solutions for traffic shaping, tiered services and lawful
intercept as needed for regulatory compliance. While appliance-based solutions are the norm,
mobile DPI "blades" have also appeared within mobile packet core solutions, and CSPs should
monitor the progress of virtual DPI solutions in cloud-based offerings.
Business Impact: End-to-end quality of service is critical for voice over IP call quality, as well as
for video quality. For mission-critical applications, mobile DPI can be a way to filter noncritical
traffic and prioritize critical traffic; combined with policy management systems, it can be an enabler
for tiered services.
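To illustrate how DPI classification can feed policy management for tiered services, the short sketch below maps an application class reported by a hypothetical DPI engine, together with the subscriber's tier, to a DSCP marking that a policy enforcement point could apply. The tier names, application labels and policy table are invented for this example; only the DSCP code points (46 for Expedited Forwarding, 34 and 26 for Assured Forwarding classes) follow standard DiffServ usage.

```python
# Illustrative sketch: map DPI-classified traffic to priority tiers via a policy table.
# Tier names, application labels and the table itself are assumed examples.

# Hypothetical policy: (subscriber tier, application class) -> DSCP marking.
POLICY = {
    ("premium", "voip"): 46,    # Expedited Forwarding for critical voice
    ("premium", "video"): 34,   # Assured Forwarding (AF41) for video
    ("basic", "voip"): 26,      # Assured Forwarding (AF31)
    ("basic", "video"): 0,      # best effort
}
DEFAULT_DSCP = 0                # best effort for anything unmatched

def mark_flow(subscriber_tier, app_class):
    """Return the DSCP value a policy enforcement point would apply to the flow."""
    return POLICY.get((subscriber_tier, app_class), DEFAULT_DSCP)

# Example: the DPI engine labels a premium subscriber's flow as "voip".
print(mark_flow("premium", "voip"))   # 46 -> prioritized ahead of best-effort traffic
print(mark_flow("basic", "gaming"))   # 0  -> unmatched class falls back to best effort
```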
Benefit Rating: High
Market Penetration: 20% to 50% of target audience
Maturity: Early mainstream
Sample Vendors: Alcatel-Lucent; Allot Communications; Cisco; ipoque; Procera Networks;
Sandvine
Recommended Reading:
"Dataquest Insight: Mobile DPI; How Mobile Deep Packet Inspection Became Deep Pocket
Inspection"

Appendixes

Figure 3. Hype Cycle for the Telecommunications Industry, 2013

[Figure: Hype Cycle chart plotting expectations against time across the Innovation Trigger, Peak of Inflated Expectations, Trough of Disillusionment, Slope of Enlightenment and Plateau of Productivity phases, showing each profiled technology's July 2013 position and its expected time to plateau (less than 2 years, 2 to 5 years, 5 to 10 years, more than 10 years, or obsolete before plateau).]

Source: Gartner (July 2013)

Hype Cycle Phases, Benefit Ratings and Maturity Levels


Table 1. Hype Cycle Phases

Innovation Trigger: A breakthrough, public demonstration, product launch or other event generates significant press and industry interest.

Peak of Inflated Expectations: During this phase of overenthusiasm and unrealistic projections, a flurry of well-publicized activity by technology leaders results in some successes, but more failures, as the technology is pushed to its limits. The only enterprises making money are conference organizers and magazine publishers.

Trough of Disillusionment: Because the technology does not live up to its overinflated expectations, it rapidly becomes unfashionable. Media interest wanes, except for a few cautionary tales.

Slope of Enlightenment: Focused experimentation and solid hard work by an increasingly diverse range of organizations lead to a true understanding of the technology's applicability, risks and benefits. Commercial off-the-shelf methodologies and tools ease the development process.

Plateau of Productivity: The real-world benefits of the technology are demonstrated and accepted. Tools and methodologies are increasingly stable as they enter their second and third generations. Growing numbers of organizations feel comfortable with the reduced level of risk; the rapid growth phase of adoption begins. Approximately 20% of the technology's target audience has adopted or is adopting the technology as it enters this phase.

Years to Mainstream Adoption: The time required for the technology to reach the Plateau of Productivity.

Source: Gartner (August 2014)

Table 2. Benefit Ratings

Transformational: Enables new ways of doing business across industries that will result in major shifts in industry dynamics.

High: Enables new ways of performing horizontal or vertical processes that will result in significantly increased revenue or cost savings for an enterprise.

Moderate: Provides incremental improvements to established processes that will result in increased revenue or cost savings for an enterprise.

Low: Slightly improves processes (for example, improved user experience) that will be difficult to translate into increased revenue or cost savings.

Source: Gartner (August 2014)

Table 3. Maturity Levels

Embryonic. Status: In labs. Products/Vendors: None.

Emerging. Status: Commercialization by vendors; pilots and deployments by industry leaders. Products/Vendors: First generation; high price; much customization.

Adolescent. Status: Maturing technology capabilities and process understanding; uptake beyond early adopters. Products/Vendors: Second generation; less customization.

Early mainstream. Status: Proven technology; vendors, technology and adoption rapidly evolving. Products/Vendors: Third generation; more out of box; methodologies.

Mature mainstream. Status: Robust technology; not much evolution in vendors or technology. Products/Vendors: Several dominant vendors.

Legacy. Status: Not appropriate for new developments; cost of migration constrains replacement. Products/Vendors: Maintenance revenue focus.

Obsolete. Status: Rarely used. Products/Vendors: Used/resale market only.

Source: Gartner (August 2014)

Gartner Recommended Reading


Some documents may not be available as part of your current Gartner subscription.
"Understanding Gartner's Hype Cycles"
"Hype Cycle for Wireless Networking Infrastructure, 2014"
"Hype Cycle for Communications Service Provider Infrastructure, 2014"
"Hype Cycle for Communications Service Provider Operations, 2014"
"Hype Cycle for Consumer Services and Mobile Applications, 2014"

More on This Topic


This is part of an in-depth collection of research. See the collection:

Gartner's Hype Cycle Special Report for 2014

GARTNER HEADQUARTERS
Corporate Headquarters
56 Top Gallant Road
Stamford, CT 06902-7700
USA
+1 203 964 0096
Regional Headquarters
AUSTRALIA
BRAZIL
JAPAN
UNITED KINGDOM

For a complete list of worldwide locations, visit http://www.gartner.com/technology/about.jsp

© 2014 Gartner, Inc. and/or its affiliates. All rights reserved. Gartner is a registered trademark of Gartner, Inc. or its affiliates. This
publication may not be reproduced or distributed in any form without Gartner's prior written permission. If you are authorized to access
this publication, your use of it is subject to the Usage Guidelines for Gartner Services posted on gartner.com. The information contained
in this publication has been obtained from sources believed to be reliable. Gartner disclaims all warranties as to the accuracy,
completeness or adequacy of such information and shall have no liability for errors, omissions or inadequacies in such information. This
publication consists of the opinions of Gartner's research organization and should not be construed as statements of fact. The opinions
expressed herein are subject to change without notice. Although Gartner research may include a discussion of related legal issues,
Gartner does not provide legal advice or services and its research should not be construed or used as such. Gartner is a public company,
and its shareholders may include firms and funds that have financial interests in entities covered in Gartner research. Gartner's Board of
Directors may include senior managers of these firms or funds. Gartner research is produced independently by its research organization
without input or influence from these firms, funds or their managers. For further information on the independence and integrity of Gartner
research, see "Guiding Principles on Independence and Objectivity."
