
Volume 8 Number 5

MISSION CRITICAL MAGAZINE
September/October 2015

FROM FACILITY DESIGN TO INFRASTRUCTURE MANAGEMENT, DATA CENTER SOFTWARE HAS COME OF AGE.

Starting on page 16

WHAT'S INSIDE
Are There Gophers In Your
Data Center?
See page 6

Don't Let TAPs Handcuff Your Network
See page 36

www.missioncriticalmagazine.com

The Importance of Codes and Standards
See page 46

Don't be left in the dark!

Doesn't it make sense to leave maintenance of your UPS to the people that know it best? The Mitsubishi Service group offers a variety of service options that will keep your systems running at peak performance 24x7x365.
Tech support hotline:
1-800-887-7830

Available Services:
Start-up/Installation
Preventive Maintenance
Extended Warranty
Thermographic Imaging
Factory and On-site Testing
Load Bank Testing
Emergency Services

Mitsubishi Service...
Unequaled. Unsurpassed. Uninterrupted.

TABLE OF CONTENTS
September/October 2015 | Volume 8, Number 5

COLUMNS

5 CRITICAL THOUGHTS
An Embarrassment Of Riches
Our September/October issue is overflowing.
By Caroline Fritz

6 HOT AISLE INSIGHT
Are There Gophers In Your Data Center?
Small holes can cost big dollars.
By Julius Neudorfer

10 SUSTAINABLE OPERATIONS
Characteristics Of A Culture Of Excellence
Doing everything right every time.
By Terry L. Rodgers, CPE, CPMP

12 SECURITY PERSPECTIVES
What We Forgot About Server Virtualization
A refresher in virtualization security.
By Mav Turner

14 ON TARGET
Disaster Recovery And The CIO
When it comes to disaster recovery, everything is sacred.
By Paul Schlattman

FEATURES

COVER STORY
16 CFD And Mission Critical Facilities
Practical use of computational fluid dynamics in mission critical facilities.
By Dr. Reza Ghias

24 The Right Time For DCIM
Real-time or near-time? Find out what is right for your facility.
By Matt Lane

30 Motors For Mission Critical Facilities
New design for permanent magnet motors uniquely delivers ultrahigh efficiency at low speeds.
By Andrew T. Holden, P.E.

36 Don't Let TAPs Handcuff Your Network
Consider an optical tap to access critical network data.
By Jennifer Cline, RCDD, and Brian Rhoney

42 Changing The Face Of Facility Management
The Internet of Things (IoT) is coming to data center infrastructures near you.
By Bhavesh Patel

46 The Importance Of Codes And Standards
It's a matter of safety.
By Chris Crosby

50 The Payoff Of Preventive Maintenance
Get a little peace of mind.
By Kyle Tessmer

54 The Lithium-Ion UPS: Your Ally In A DC Disaster
Put your UPS in the eye of the storm.
By Emilie Stone

56 IT Agility: Making Better Use Of Power Monitoring Data
Designed for the Internet of Things, today's data center hardware provides valuable feedback that enables all-software instrumentation for automation.
By Jeff Klaus

60 Ultraviolet Energy And The Data Center
UV is an important addition to the data center cooling equation.
By Forrest Fencl

DEPARTMENTS

63 Events
63 News
64 Products
65 Heard on the Internet

CRITICAL THOUGHTS

Caroline Fritz is the editor of Mission Critical. Follow us on Twitter at @MCritical. And join us for great industry discussion at Mission Critical's Open Forum Discussion Group on LinkedIn.

By Caroline Fritz

An Embarrassment Of Riches

Our September/October issue is overflowing.

We have a great, jam-packed issue for you this month. First, we have two stories on our cover topic, data center software: Dr. Reza Ghias of Southland Industries writes about using computational fluid dynamics to design data centers, and Matt Lane of Geist writes on new ways to use data from data center infrastructure management (DCIM) systems.

But that is just the tip of the iceberg.

Andrew Holden of NovaTorque writes about motors for mission critical facilities; Jennifer Cline and Brian Rhoney of Corning Optical Communications weigh in on optical taps; Bhavesh Patel of ASCO Emerson Network Power writes on the Internet of Things and facility management; Mission Critical Unconventional Wisdom columnist Chris Crosby forgoes his column this month to write on the importance of codes and standards; Kyle Tessmer of Mitsubishi Electric Power writes on the power of preventive maintenance; Emilie Stone of Methode pens an article on lithium-ion UPS and disaster recovery; Jeff Klaus of Intel examines how power monitoring data can help streamline your facility; and Forrest Fencl writes about using ultraviolet energy to help cool data centers. In addition, we have our regular slate of columnists on hand, as well as a new product roundup and the latest news and events.

As you can see, I wasn't exaggerating. September/October is huge!

Connect With Us
Mission Critical gives you many ways to stay in touch. Whether it is on Twitter, Facebook, LinkedIn, or Google+, we update our content daily to keep you in the know. And download our Mission Critical app at http://www.missioncriticalmagazine.com/apps to take us with you throughout your busy day.

Caroline Fritz
Editor

SUBSCRIPTION INFORMATION
For subscription information or service, please contact Customer Service at (847) 763-9534, fax (847) 763-9538, or MC@halldata.com.

GENERAL INFORMATION
2401 W. Big Beaver Rd., Suite 700, Troy, MI 48084-3333
(248) 362-3700 Fax: (248) 362-0317
www.missioncriticalmagazine.com

GROUP PUBLISHER
Peter E. Moran, moranp@bnpmedia.com, (914) 882-7033, Fax: (248) 502-1052

EDITORIAL
Caroline Fritz, Editor, fritzc@bnpmedia.com, (303) 250-2781
James Siegel, Managing Editor, siegelj@bnpmedia.com, (415) 503-0455

ADVERTISING SALES
Russell Barone Jr., Midwest and West Coast Advertising Manager, baroner@bnpmedia.com, (219) 464-4464, Fax: (248) 502-1085
Vic Burriss, East Coast Advertising Manager, burrissv@bnpmedia.com, (610) 436-4220 ext. 8523, Fax: (248) 502-2078

ADVERTISING PRODUCTION & EDITORIAL DESIGN
Kelly Southard, Production Manager
Jake Needham, Sr. Art Director

MARKETING
Kevin Hackney, Marketing Manager, hackneyk@bnpmedia.com, (248) 786-1642
Steven Wassel, Trade Show Coordinator, wassels@bnpmedia.com, (248) 786-1210
Jill L. DeVries, Editorial Reprint Sales, devriesj@bnpmedia.com, (248) 244-1726
Kevin Collopy, Senior Account Manager, kevin.collopy@infogroup.com, (845) 731-2684

AUDIENCE DEVELOPMENT
Hayat Ali-Ghoneim, Audience Marketing Sr. Specialist
Devon Bono, Multimedia Specialist
Wafaa S. Kashat, Audience Audit/Postal Specialist

LIST RENTAL
Postal contact: Kevin Collopy at (402) 836-6265, kevin.collopy@infogroup.com
Email contact: Michael Costantino at (402) 836-6266, michael.costantino@infogroup.com
Single Copy Sales: Ann Kalb at (248) 244-6499, kalbr@bnpmedia.com

DIRECTORIES
Erin Mygal, Directory Development Manager, mygale@bnpmedia.com, (248) 786-1684

CORPORATE DIRECTORS
See the complete list of BNP Media corporate directors at www.missioncriticalmagazine.com.

INDUSTRY ALLIES
AFCOM Members
INTERNATIONAL: The end-to-end reliability forum.

MISSION CRITICAL (ISSN: Print 1947-1521 and Digital 1947-153X) is published 6 times annually, bi-monthly, by BNP Media, Inc., 2401 W. Big Beaver Rd., Suite 700, Troy, MI 48084-3333. Telephone: (248) 362-3700, Fax: (248) 362-0317. No charge for subscriptions to qualified individuals. Annual rate for subscriptions to nonqualified individuals in the U.S.A.: $123.00 USD. Annual rate for subscriptions to nonqualified individuals in Canada: $160.00 USD (includes GST & postage); all other countries: $178.00 (int'l mail) payable in U.S. funds. Periodicals Postage Paid at Troy, MI and at additional mailing offices.

Postmaster: Send address changes to MISSION CRITICAL, P.O. Box 2145, Skokie, IL 60076. Printed in the U.S.A. Copyright 2015, by BNP Media, Inc. All rights reserved. The contents of this publication may not be reproduced in whole or in part without the consent of the publisher. The publisher is not responsible for product claims and representations.

Canada Post: Publications Mail Agreement #40612608. GST account: 131263923. Send returns (Canada) to IMEX Global Solutions, P.O. Box 25542, London, ON N6C 6B2.

Change of address: Send old address label along with new address to MISSION CRITICAL, P.O. Box 2145, Skokie, IL 60076.

For single copies or back issues: contact Ann Kalb at (248) 244-6499 or KalbR@bnpmedia.com.

HOT AISLE INSIGHT
Julius
Neudorfer is the CTO and
founder of North American Access
Technologies, Inc. (NAAT). Read his complete
archive at
www.missioncriticalmagazine.com/juliusneudorfer.

By Julius Neudorfer

Are There Gophers In Your Data Center?
Small holes can cost big dollars.

I have been pontificating about cooling system energy efficiency and water usage lately. In my last column, I discovered that a single hole on a golf course can use 2.8 million gallons of water a year just to stay green. Since I am not a golfer, my impression of golf courses is based on the 1980s comedy Caddyshack. The film was set at a golf course with a clever gopher that liked to dig holes, despite the best (or worst) efforts of the groundskeeper. This crafty creature ultimately cost the club money and lost customers. While gophers are not usually a problem in most data centers, it turns out that a hole in the raised floor for cabling can be quite costly as well.

So let's examine the issue of raised floors and cable openings, since it seems the world will continue to use and build traditional raised floor data centers, despite all the paradigm shifts in data center design from Google, Facebook, Open Compute, Yahoo's Chicken Coop, etc. The classic raised floor data center with underfloor cabling may be slowly fading, but it is far from gone. Here we are, approximately 20 years after the hot aisle/cold aisle concept was introduced, and yet there are still many basic airflow issues that continue to plague these data centers.

The classic raised floor design serves two primary purposes: a supply air plenum and a place to hide the power and network cables. At face value the design is relatively straightforward: just have downflow cooling units (CRACs/CRAHs) blow cold supply air into an underfloor plenum and distribute it through perforated tiles or floor grates in the cold aisle, where it is available to be drawn into the front intakes of the IT equipment in the cabinets. The hot exhaust air from the IT equipment in the back of the cabinets then blows into the hot aisle and (perhaps magically) finds its way back to the return of the cooling units.

If only it were that simple. In actual practice, a myriad of issues seem to get in the way of this design's simple concept, especially when applied to higher density cabinets. These generally fall into two categories: wasted cold bypass airflow and hot recirculated airflow. Let's first look at the definition of bypass air: any cold supply airflow that does not get to the intake of the IT equipment. Bypass airflow occurs at any opening in the raised floor; cable cutouts, miscellaneous leakage areas, gaps along the perimeter where the floor meets the walls, and other openings such as those under PDU cabinets are common examples.

Data center managers have begun to pay more attention to this and are trying to address it wherever possible. Proper sealing of the gaps where the raised floor meets the walls is a good start. The other area, and the worst offender, is the cable cutout under every cabinet.


TECHNICAL ADVISORY BOARD


Robert Aldrich
Hitachi Data
Systems

Bill Mazzetti
Rosendin Electric

Christian Belady
Microsoft

John Musilli
Intel Corp

Rudy Bergthold,
P.E.
Cupertino
Electric, Inc.

Bruce Myatt, P.E.


Critical Facilities
Round Table, The Data
Centers, LLC

Dennis Cronin
SteelOrca

Russ B. Mykytyn
Skae Power Solutions

Peter Crook
Upsite
Technologies

Dean Nelson
eBay

Peter M. Curtis
President
Power
Management
Concepts

Glen Neville
Deutsche Bank

Kevin Dickens
Jacobs-KlingStubbins

Julius Neudorfer
North American
Access Technologies,
Inc.

Peter Funk Jr.


Funk and Zeifer

Thomas E. Reed, P.E.


Jacobs-KlingStubbins

Scott Good,
Uptime Institute

David Schirmacher
Digital Realty Trust

Peter Gross,
Bloom Energy

Jim Smith,
Digital Realty Trust

Cyrus Izzo
Syska Hennessy
Group

Robert F. Sullivan

Jonathan Koomey
Stanford University

Henry Wong
Intel Corp.

Keith Lane, P.E.


Lane Coburn &
Associates, LLC

Stephen Worn
Data Center Dynamics,
OT Partners

Friend us on Facebook
www.facebook.com/
MissionCritical

Look for us on Twitter

@mcritical

Discover ebm-papst.
Energy-saving system solutions for IT hardware
cooling at discover.ebmpapst.com/itcooling

Susanne Lohmann, project engineer at ebm-papst

At ebm-papst, we develop fans for cooling hardware. They are particularly


powerful, yet remain extremely quiet, save energy and are entirely
maintenance-free. Enabling even the IT manager to keep a cool head.
You can't see it. But you can feel it!
For our complete product line, visit ebmpapst.us

HOT AISLE INSIGHT
Continued from page 6

Moreover, these openings range in size from a small 4- x 4-in. notch at the edge of a tile to half or even a whole tile! If left open, a substantial portion of the supply air becomes bypass air. This results in several problems, including lower static pressure, which reduces the airflow where it belongs, through the perforated tiles or grates, and wastes fan energy. In addition, when the bypass air mixes with the warmed IT exhaust air, it lowers the return air temperature to the cooling unit, lowering its cooling capacity and energy efficiency. To address bypass air, the brushed-style cable grommet was developed over a decade ago. However, only more recently has it moved toward more widespread use, and many data centers still have not addressed this issue.

As for recirculation, it occurs when the warm IT exhaust air re-enters the IT equipment (either the same server or any other server), which typically results in hot spots. This is a more complex problem to solve, but the first line of defense is installing blanking plates in the racks to minimize back-to-front recirculation within the same cabinet. On a broader scale, aisle containment systems prevent over-the-top, end-of-aisle, and aisle-to-aisle recirculation, as well as bypass air, but they are more costly and more difficult to retrofit in existing facilities.

This past July an ASHRAE white paper reviewed this issue ("Plenum-Leakage Bypass Airflow in Raised-Floor Data Centers" by James R. Fink, P.E.). While the white paper discussed bypass air and related issues as a general problem, it cited cable openings as the majority cause of floor-related bypass airflow. To quantify the issue, a specially constructed test fixture was created that allowed accurate measurements of leakage. In addition, to simulate real-world conditions, seven test conditions were used that varied the number of network and power cables, as well as their positioning in the collar.

The overall finding of the paper was that, without any form of bypass air control, 50% or more of underfloor supply air leakage typically comes from those cable cutouts. The paper also took the relatively unusual step of analyzing and comparing different brands of cable grommets with brush collars. While the brush collars appeared visually similar, the study showed a huge variation between the best and the worst performing devices. In order to make accurate comparisons, the author created a sealed test chamber that used a controlled static pressure of 0.05 in. w.c. (12.5 Pa) to simulate typical underfloor pressure. In practice this pressure varies, and more recently higher pressures are being used to achieve greater airflow rates through perforated floor tiles and grates to try to meet the challenge of higher density racks. In those cases, the waste from cable cutouts, and the savings from brush collar grommets, are even greater.

THE BOTTOM LINE


So how much is that hole for each cable opening costing? According to the report, it is an astounding $480 per year (compared to the raw opening without any grommet). The white paper used a cost of $0.13 per kWh (averaged over 10 years) as a basis to calculate projected savings.

The paper stated, "Installation of grommets to seal cable cut-out holes is nothing short of an outstanding investment. The relative performance differential among several popular tested grommets is significant and worthy of consideration."


FIGURE 1. How much money could a simple brush grommet save? Photo courtesy of Upsite Technologies

FIGURE 2. How much money does this cable cutout waste? Photo courtesy of Upsite Technologies

Moreover, it noted that the vastly differing performance of various brands had a huge impact on projected savings: "Between the best and the worst-performing grommets, there is a significant difference in ten-year savings. In the hypothetical 1-MW data center with 200 equipment racks and one grommet per rack, this difference is nominally $72,000." It summarized the highly detailed results by declaring that, given the almost negligible cost of grommets relative to the obtainable savings, "there is little reason not to choose the best-performing grommet."
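The paper's headline numbers are easy to sanity-check with a few lines of arithmetic. The sketch below uses only the figures cited above; nothing else is assumed.

```python
# Back-of-the-envelope check of the white paper's cited savings figures.
SAVINGS_PER_OPENING_YR = 480   # $/yr, sealed vs. raw opening (cited)
RACKS = 200                    # hypothetical 1-MW data center (cited)
YEARS = 10                     # horizon used in the paper (cited)
BEST_VS_WORST_10YR = 72_000    # $ spread between grommet brands (cited)

fleet_savings_yr = SAVINGS_PER_OPENING_YR * RACKS
brand_spread_yr = BEST_VS_WORST_10YR / (RACKS * YEARS)

print(f"Sealing all openings: ${fleet_savings_yr:,}/yr, "
      f"${fleet_savings_yr * YEARS:,} over {YEARS} years")
print(f"Best vs. worst grommet: ${brand_spread_yr:.0f} per opening per year")
```

Run against the cited inputs, that is $96,000 per year for the hypothetical room, and a $36-per-opening-per-year spread between the best and worst grommets, which is where the $72,000 ten-year difference comes from.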
Many methods to save energy and improve cooling performance have been developed over the last decade. Some are simple and cost nothing to implement, such as raising the supply air temperature, while others require some cost and effort, and therefore economic justification. In today's highly competitive, efficiency-driven data center market, an obvious but overlooked problem that can be easily addressed with a quick ROI is a rare find. The savings cited in the ASHRAE white paper are very clear. Moreover, brush-style grommets are easily installed, operationally non-intrusive, and can be implemented over time as resources permit. So if you have not already done so, start sealing those cable cutout openings using the grommets with the best performance; and if there seem to be some new, odd-looking holes, better check for gophers.

REPRINTS OF THIS ARTICLE are available by contacting Jill


DeVries at devriesj@bnpmedia.com or at 248-244-1726.

Your data center.


Your power needs.
Your way.
For over 25 years, our flexible and scalable busway
systems for data centers have been proven to
deliver a lower cost of ownership, while allowing
easier installation with faster expansions and
additions. Yet STARLINE Track Busway is designed
to be completely customizable to meet your specific
needs, with a wide variety of specialized plug-in
units, multiple feeds, and monitoring options.
To learn more about the choices that STARLINE
Track Busway gives you, visit StarlinePower.com
or call us at +1 724-597-7800.

SUSTAINABLE OPERATIONS

Terry L. Rodgers,
CPE, CPMP, is vice
president, Sustainable Operations
Services at Primary Integration Solutions, Inc.,
the Charlotte-based commissioning business of Primary
Integration (PI). Access his entire archive at
www.missioncriticalmagazine.com/terryrodgers.

By Terry L. Rodgers, CPE, CPMP

Characteristics Of A Culture
Of Excellence
Doing everything right every time.

Over the last few years I have been fortunate to have toured many critical facilities, including performing in-depth reliability assessments of over 45 sites in 20 countries across five continents. I have inspected literally millions of square feet of computer room space and supporting infrastructure. I have interviewed and evaluated facilities management staff and their processes and compared their performances against their corporate standards and industry best practices. What I have seen is a broad cross-section of compliance ranging from marginal to awesome.



What I have also noticed is that in almost every case my


first impressions based on a familiarization tour and initial
staff interviews pan out to be accurate in the long run. There
are obvious telltale signs that quickly reveal what the culture
is for any given site. General housekeeping and cleanliness,
organization, institutional knowledge, and availability of
accurate site specific documentation are just a few aspects that
are indicative of how well the site is managed.
In the first sentence of my first column for this magazine I wrote, "Discipline, rigor, experience, training, process-driven procedures, and a culture of excellence; that's what it takes to deliver continuous operations over the life of a critical facility." Everything I have seen over the last few years reinforces this statement. What follows are some characteristics that are common to the sites I have visited that have a culture of excellence.

INDICATORS
General housekeeping is one of the first and most obvious
indicators of how much pride and attention the staff has in their
site. Some sites are relatively clean, especially in areas where
people are most likely to traverse, and some are, well, less so.
As you move through the site and inspect the less traveled
spaces such as mechanical and electrical closets, tank rooms,
roofs, etc., the level of cleanliness and housekeeping tend to
drop off. When instead you find even the most remote and least
accessible spaces to be clean and clear of debris, dirt, stains,
etc., it is obvious that the staff enforce a high standard of care.
I've also noted that in many instances excellent lighting promotes excellent housekeeping, and the opposite is also true. Dimly lit spaces tend to get less attention. Good housekeeping is not only superficial, but also substantive, in that a leak, stain, debris, or other discrepancy stands out and begs to be corrected.
Another obvious characteristic follows the old saying, "a place for everything and everything in its place."

WE HATE TO
INTERRUPT.

100% RELIABLE.
DESIGNED FOR ZERO
POWER INTERRUPTIONS.
From diesel generator sets with an industry-leading average
load factor to gas-powered cogeneration systems, MTU Onsite
Energy provides a constant flow of power. And peace of mind.
For more, visit numbers.mtuonsiteenergy.com.

24/7

Power interruptions can be costly. Count on MTU Onsite Energy


for reliable power generation solutions whenever and wherever
continuous power is needed.

60

With 60 years of power generation expertise and a century


of engine innovation, MTU Onsite Energy provides complete
solutions that are trusted all over the world.

85%

MTU Onsite Energy diesel generator sets are certified at an 85


percent average load factor over 24 hours, significantly higher
than the 70 percent average load factor required by ISO 8528.
The result? Lower engine stress, reduced maintenance and
prolonged engine life.

90%

Improve your profitability and reduce your dependence on


the utility with MTU Onsite Energy cogeneration systems.
Powered by natural gas or biogas, these units produce thermal
and electric power from a single fuel source, achieving more
than 90 percent efficiency.

7 OF 10

Data centers need 100 percent reliability or they risk lost data
and dissatisfied customers. Seven of the top ten online companies
rely on MTU Onsite Energy for a constant flow of power. And
peace of mind.

THERE'S STRENGTH
IN NUMBERS.
MTU Onsite Energy diesel and gas generator sets provide
reliable power for a wide range of applications around the
world. And with our worldwide distribution network, support
is always nearby.
To learn more, visit numbers.mtuonsiteenergy.com.

NUMBERS.MTUONSITEENERGY.COM

MTU Onsite Energy


A Rolls-Royce Power Systems Brand

In a recently visited site, this practice was followed to perfection. Upon entering every mechanical or electrical room there would be a first aid kit, emergency flashlight, and floorplan. At least one laminated and framed single-line diagram would be posted in the room, with the portions that reside in the room annotated by dotted-line borders and color coded. These were hung by string and wall hooks so the diagram could be removed and used by the staff while standing in front of the respective gear and equipment, but they were always returned to their rightful place. Ladders, tools, and portable equipment were stored in designated places identified by color-coded tape on the floor, and the only items allowed to be stored in the room were those applicable to the room's purpose. Any parts or materials in the space were directly related to the systems and equipment in the room; otherwise, the standing policy was that these spaces were not for general or unrelated storage.
Signage, labeling, and color-coding combined with intuitive
conventions are also indicative of how standards are employed.
The best sites typically have comprehensive use of color
coded infrastructure and standardized labels such that upon
entering a room everything is easily understood. Conduit and
piping systems are simple to trace when they are painted or
otherwise color coded. Labels that not only identify the system
and/or equipment, but also conform to logical identification
conventions, can provide lots of critical information at a glance.
An example is electric panels with labels indicating what system,
switchgear, and breaker the panel is fed from and with color
codes that indicate whether the service is utility only, backed
up by generator, or on UPS. This becomes even more important
for sites with rooms and redundant systems that look similar if
not almost identical such as A and B switchgear, UPS, and
other infrastructure that otherwise could lead to human error due
to misidentification of equipment especially during emergency
or anomaly responses.
Easy access to site specific and accurate documentation is
another characteristic of a culture of excellence. How a request
for a drawing, manual, procedure, or other critical document is
responded to is a clear indication of how well the site manages
documentation. When the document is produced with ease and
the staff is confident it is current and accurate, there is likely a
formal document control system in place and enforced. When it
takes several tries to locate the document, and then it is provided
with the caveat that it may not be accurate, then there either is no formal document control process or it is not enforced. Regardless, the value of the information is reduced, since it isn't readily available and can't be trusted.

HIGH STANDARDS A MUST


I could continue with an almost endless number of other aspects
and indicators of what constitutes a culture of excellence. What

is consistent is that in all cases there is a very high standard of


what is considered acceptable and expectations that all staff will
not only comply, but will collectively enforce compliance by
others including teammates, contractors, visitors, and everyone
else. This means when something falls below the standard, it
gets resolved immediately. Messes are cleaned up, missing
labels get replaced, leaks get repaired, documents get updated,
and obsolete versions get archived. As parts and materials
get used, the stock gets replenished. Tools, materials, and
equipment get returned to their proper place. Staff get trained,
drilled, and recertified whenever systems are modified or the
site infrastructure changes. Contractors are supervised and their
work inspected before they are allowed to depart or their work
accepted.
As I also stated in that first article, the key is to do three
simple, but very difficult things:
Do everything
Do everything right
Do everything right every time
There is one other very important characteristic that is required
to foster a culture of excellence. There must be a properly staffed
and resourced facilities management organization. Effective
leadership champions the mission, purpose, and needs to
executive management to garner the required budget, resources,
and support necessary to succeed. The leadership must also
establish the standards that define what is acceptable. There
must be good management that can establish both organization
and processes that provide the order and structure needed
to operate and maintain the facility. Management must also
direct and supervise the staff in the execution of its duties
and responsibilities, schedule tasks and activities, and enforce
compliance through discipline and rigor. And last but not least,
there must be sufficient technicians, operators, and staff to do
everything right every time. This means qualified staff with site
specific knowledge, the tools and resources required, and the
skills to perform the tasks and activities assigned.
Insufficient staffing and/or resources inevitably results in a
reactionary culture where staff constantly has to prioritize tasks
and activities and compromise on performance. At first it is
the superficial tasks that get deferred (housekeeping, storage
and inventory control, document management, non-critical
preventive maintenance, etc.), but eventually the standards aren't met, morale degrades, and pride and ownership dissipate. Basically, the staff no longer can do everything, much less do it right every time.

REPRINTS OF THIS ARTICLE are available by contacting


Jill DeVries at devriesj@bnpmedia.com or at 248-244-1726.


SECURITY PERSPECTIVES

Mav Turner is
the director of product
marketing for the security portfolio
at SolarWinds, an IT management software
provider based in Austin, TX. He has nearly 15 years of
IT management experience, including roles in security, systems, and
network administration. Read this article online at
www.missioncriticalmagazine.com/mavturner.

By Mav Turner

What We Forgot About Server Virtualization

A refresher in virtualization security.

Remember back 10 years ago when there was still a question as to whether you should virtualize your data center or not? Back then, there were a lot of interesting security arguments levied against virtualizing servers. Many of those arguments were the standard fear, uncertainty, and doubt surrounding any new technology adoption. However, there were also some really relevant concerns that we've forgotten, but probably shouldn't have.


ISOLATION
One of the most important of these concerns has to do with
isolation. Are virtual machines (VMs) truly isolated if they
are running on the same hypervisor? Does a guest VM pose a
threat to the host and other VMs running on that host?
Although there have been occasional vulnerabilities discovered that allowed escape from a guest to a host, in general this concern hasn't manifested in any wide-scale breaches. However, you should still design with this risk in mind, particularly for systems hosting confidential data and guests that bridge different security zones.
If possible, group these machines together to minimize
exposure in an attack. Raw access to resources increases that
risk, so be careful of granting this level of access. Most of the
security concerns are not about direct jumping from guest to

12 |

Mission Critical

SEPTEMBER/OCTOBER 2015

guest, or at least no more than the standard attacks that applied to physical hosts; the concern is about exposure to the hypervisor. If an attacker can gain even marginal control of, or data from, the host, then you should consider all of the guests compromised. This may sound extreme, but it's the reality of the security model in a virtualized data center.
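One way to make that grouping advice concrete is to treat the security zone as a hard placement constraint and flag any host that violates it. The sketch below is a toy illustration; the zone names and inventory format are hypothetical, not drawn from any particular hypervisor's API.

```python
from collections import defaultdict

# Toy placement check: flag any hypervisor host that mixes guests from
# different security zones, since compromising that host bridges the zones.
# The inventory below is hypothetical example data: (vm, host, zone).
inventory = [
    ("web-01", "host-a", "dmz"),
    ("web-02", "host-a", "dmz"),
    ("db-01",  "host-b", "internal"),
    ("hr-app", "host-b", "confidential"),  # mixes zones with db-01
]

zones_by_host = defaultdict(set)
for vm, host, zone in inventory:
    zones_by_host[host].add(zone)

for host, zones in sorted(zones_by_host.items()):
    if len(zones) > 1:
        print(f"{host}: guests span zones {sorted(zones)}; "
              "a host compromise bridges these zones")
```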
Since the host is so critical to your overall security architecture, it's also important to manage it with secure protocols, limit who has network and account access, audit that access, and keep it up to date with patches. If an attacker gets full control of a single host, not only will they have full control of and access to the guests running on that host, but they will likely have very broad network access, too.

Since most hosts contain VMs performing different functions, multiple VLANs are often trunked in. To make things easy and limit the requests on network teams, the full set of VLANs will often be set up, even if the current guests are only using a small subset. As network virtualization gains traction, this will only continue. Again, this is an important time to think about what access each VM actually needs and which network it needs to be on. If an attacker has access to management VLANs or other sensitive networks, they have easily compromised the entire network, not just the single host.

IMAGES AND SNAPSHOTS


A second concern that wasn't fully appreciated at the time but that has grown as a real risk is the security of images and snapshots. Although the technical threat was understood, there was little appreciation for the sheer number of these files floating around.

If an image is compromised, it's the equivalent of someone powering off a server and walking out of your data center with it. That's bad, and it's much easier to do than ripping a production system off the shelf and making a run for it. Snapshots are equally dangerous because they can contain the data running in memory at the time of the snapshot. Credentials are the biggest concern for leakage here, but any data handled by the machine is at risk.

We often talk about VM sprawl from the powered-on machines' perspective and forget about all of those images and snapshots lying around. You need to define a clear plan for where those files are stored, who has access, how the access is audited, and when you should delete the old snapshots. Storage costs aren't always the driving factor for better image and snapshot management, but security certainly should be.
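Parts of such a plan can be automated. The sketch below flags stale snapshot and image files under a datastore path; the path, file extensions, and 30-day window are illustrative assumptions rather than any vendor-specific layout.

```python
import os
import time

# Flag snapshot/image files older than a retention window so they can be
# reviewed and deleted. Path, extensions, and the 30-day policy are
# illustrative assumptions, not a vendor-specific datastore layout.
DATASTORE = "/datastores/prod"
SUSPECT_EXTENSIONS = (".vmsn", ".vmem", ".ova", ".vhd", ".qcow2")
MAX_AGE_DAYS = 30

cutoff = time.time() - MAX_AGE_DAYS * 86_400
for root, _dirs, files in os.walk(DATASTORE):
    for name in files:
        if name.lower().endswith(SUSPECT_EXTENSIONS):
            path = os.path.join(root, name)
            if os.path.getmtime(path) < cutoff:
                print(f"stale (> {MAX_AGE_DAYS} days): {path}")
```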

LEGACY TECHNOLOGY
The third and final security issue with server virtualization to consider is how it enables insecure legacy technology to remain in your organization. This is one of the big benefits of virtualization, but it needs to be managed properly. That application that only runs on Windows XP and hasn't been or can't be patched is a huge hole in your defenses.

If you can't migrate to a more secure solution, make sure you have walled off such servers and applications as much as possible. You might need to use local account privileges so the machine doesn't have access to the domain, and it definitely should be segmented from a network perspective as much as possible. If the machine doesn't need internet access to function, you should not allow it to connect outbound. It might take a few extra steps for users or administrators to access, but it is well worth it given the security risk old operating systems and applications pose.

IN CONCLUSION
Server virtualization is not only here to stay, but it will
continue to expand through the stack into fully virtualized data
centers. By understanding the principles of virtualization as
a technology, and recalling the initial concerns we had when
server virtualization as we know it now was the trendy new
technology, we can better manage and secure our virtualized
data centers.
Just because it's common now doesn't mean we can forget our original concerns and assume all of the problems have been solved. As we enter a time of hyper-converged data centers, remember the journey and apply those early lessons in virtualization as complexity increases.

REPRINTS OF THIS ARTICLE are available by contacting


Jill DeVries at devriesj@bnpmedia.com or at 248-244-1726.

Every minute of downtime costs data


centers an estimated $8,000.
That means accurate, reliable power system testing
is crucial. ComRent Load Bank Solutions provides
resistive, reactive and capacitive load banks in low
or medium voltage. Our rack-mounted load banks
precisely simulate high-density server
loads and airflow for the best
hot aisle/cold aisle testing.
With ComRent, you
test better.

LOAD BANK SOLUTIONS


Our Knowledge, Your Power.

ComRent.com
1-888-881-7118



ON TARGET

Paul Schlattman
is senior vice president, ESD
Consulting, Chicago, IL.
Access his entire archive at
www.missioncriticalmagazine.com/paulschlattman.


By Paul Schlattman

Disaster Recovery And The CIO


When it comes to disaster recovery, everything is sacred.

I've been in the data center industry for a long, long time now; too long. One of the largest clients of my career was Comdisco. As principal, I was in charge of the design of a majority of their data centers nationally and internationally. Comdisco had been the pioneer in the disaster recovery (DR) industry since the '80s. While I was the principal in the design of several projects, I was also an alliance partner in their consulting practice. With this said, I was continuously involved with the DR plans and the design criteria around supporting those plans.
Recently, I conducted interviews with an enterprise client that discussed the levels of criticality within their applications. Their response to criticality was similar to that of other enterprise clients: the method used to identify critical applications was to create a tier program, Tier 1-3, with three being the highest and most critical applications (or vice versa). The problem with this antiquated method of categorizing applications into tiers is that what may not be critical to you may be critical to me. If I am working in an application that goes down, while it may not have a direct effect on the business, it does reduce productivity.



FIGURE 1. Data Realty recently built a facility in South Bend, IN.

As the use of technology increases and the dependence on it grows, more applications are seen as critical rather than secondary. While losing email in the cloud may not have as direct an impact as a financial application, the loss of email breaks down communication.

TIER II CITIES
As plans are created for a DR site, several items need to be addressed. Do latency and distance drive criticality and recoverability? Since many disasters are local or regional, is the secondary DR site off the grid of the primary site? Are there remote hands that are knowledgeable of DR applications at the remote DR site? As I look around the Midwest, I see several opportunities for data center development concerning Tier II cities and the regions they serve.

One client, Data Realty, recently built a 50,000-sq-ft data center in South Bend, IN. The site is a greenfield development offering numerous benefits that other sites don't. While one might think, "Why invest in a data center in South Bend?" the location is actually brilliant. Data Realty in South Bend can support Chicago and Indianapolis, both for DR and as a primary site. As a DR site, South Bend is not on the ComEd grid, and it is additionally not on the Indianapolis Power and Light grid. Therefore, the location exactly complements the DR strategies of both cities. This, coupled with hands-on management of applications during a crisis, makes Data Realty the preferred choice in selecting a DR site.
While looking at the success of Data Realty in South Bend, I ask myself, "Why can't other Tier II-III cities model this program?" Let's examine Milwaukee (a Tier II city).

If you're a wholesale/colocation company, I can point to 3 MW of demand in Milwaukee with little or no supply. Yet no one seems interested in building a data center in Milwaukee or Madison, WI, which is even better. Madison can support Chicago, Milwaukee, and Minneapolis.

Several of the large colocation providers addressed Tier II cities as if they were a larger market by building large data centers. They didn't right-size their prototypes to support the market, and they are now selling their data centers in these markets. If addressed properly, Tier II cities will provide a strategic play in DR as well as edge compute.

EVERYTHING IS SACRED
The business protocol for subscribing to a disaster recovery plan has been to back up only what is critical. Due to interdependencies from application to application, the constant need for all applications and storage area networks creates a different DR plan than what we've seen in the past. This, combined with the proper location, creates an overall DR plan that is safe and effective.

NEW CONSIDERATIONS FOR THE CIO
While disaster recovery criteria have been established in the industry for over 30 years, new technology drives different criteria than previously identified within the enterprise data center market. Some of the new considerations include:

• What is your cloud provider's DR plan? Can you review their plan prior to subscribing to cloud services?
• Is your internal cloud or hybrid cloud recoverable?
• While operating in a recovery cloud situation, is your network secure and reliable?
• Since several people now commute and utilize a virtual office approach, is your recovery plan accessible nationally and internationally?
• DR testing is not just exercised in critical applications; it should be tested in the virtual world directly with the end users.

REPRINTS OF THIS ARTICLE are available by contacting Jill


DeVries at devriesj@bnpmedia.com or at 248-244-1726.

ENGINEERED TO PERFORM

800-640-3141 | MIRATECHCORP.COM

EMISSIONS | CATALYSTS | HOUSINGS | SILENCERS | SCR | DPF | SERVICE | TRAINING | TURNKEY


CFD And Mission Critical Facilities

Investigate the practical use of computational fluid dynamics in the design of mission critical facilities.

In today's 21st century business environment, the need for efficient data centers is increasing at unprecedented rates as the demand for computing, processing power, and data storage grows exponentially. The energy consumption in a data center can be significantly more than in a typical office space, and a considerable portion of the energy cost (30% to 50%) is dedicated to the data center's cooling system. More than ever, IT equipment is getting smaller in size yet more powerful, and proper, efficient cooling system design plays an important role in saving energy. The new generation of computers operates at higher temperatures, which reduces the cooling cost by making higher computer intake temperatures (80°F to 85°F) possible. However, going beyond the intake temperature design criteria can cause overheating and make IT equipment more susceptible to failure. As a result, the need for accuracy and a scientific basis in the design of the data center's thermal management requires the use of advanced engineering tools such as computational fluid dynamics (CFD) to parameterize and visualize variable designs. CFD enables design engineers to recognize issues at early stages of the design and tackle the engineering challenges that cannot be solved accurately using a conventional design approach.

By Dr. Reza Ghias

Dr. Reza Ghias is the director of the Advanced Simulation Center (ASC) at Southland Industries, a national MEP building systems firm. With more than 15 years of experience conducting research and executing computational fluid dynamics (CFD) projects in a wide range of industries, he works closely with Southland's design engineers and clients to overcome design challenges and develop innovative building systems designs. Reza has received his Ph.D. in Mechanical and Aerospace Engineering and has authored and presented many papers, articles, and technical reports proving the results of his work. He can be reached at RGhias@southlandind.com.

CONTAINMENT DESIGN


As air passes through servers, its temperature rises. The recirculation of this hot air into the intake can eventually cause equipment failure. Installing a containment and chimney configuration can prevent the mixing of cold and hot air that forms hotspots while also improving cooling system efficiency. In order to justify the installation costs and confirm potential energy savings, CFD should be applied during containment design. The current airflow situation in existing data centers can be investigated, and possible hotspots under the data hall's design can be predicted, through room simulation and temperature impact evaluation. Figure 1 compares the temperature contours at 4 ft above the floor for a data hall with and without containment/chimney. The results show that the maximum temperature was reduced from 125°F (no containment) to 95°F (containment) due to preventing the recirculation of hot air.

FIGURE 1. (a) The temperature distribution in the data hall with no containment. (b) The rack arrangement in the data hall. (c) The temperature distribution in the data hall with containment. (d) Containment (shown in green) separates the hot air in the back of the servers and the cold air at the intakes.

The last thing an emergency response center


needs is a power emergency.
Our free, non-commercial information can help keep you up and running.
People depend on you to keep your facility operational 24/7/365, yet you are vulnerable every time there's a thunderstorm, even if your facility meets code. An average lightning strike carries over 20,000 amps, enough to destroy expensive equipment and put the lives of the people counting on you at risk. Simple, inexpensive wiring and grounding improvements can help protect you from the devastating effects of the downtime emergency centers like yours suffer each year. CDA can help with free CDs, DVDs, and case histories. We'll even conduct free seminars for groups. Go to www.copper.org/PQ and find out what you can do right now to help stop an emergency before it strikes.

GAPS AND CRACKS

While the use of a containment and chimney configuration is effective, it is not a standalone solution for separating cold and hot air in a data center. It is important to also investigate the impact of structural gaps in the data center design. Air can penetrate the gaps and cracks that exist in the cabinet structure, between the containment/chimney and racks. It can also enter through a failed server when its fan cannot overcome the pressure gradient between the cold and hot aisles. Depending on the location and size of such gaps, hotspots can form or the cooling load can become wasted, despite the investment in containment and chimney installation. CFD can model the impact of the gaps and provide valuable information to predict the issue in advance and enable the design to be improved. Further, hot air recirculation and cooling load leakage occur when enough pressure is present to force the hot or cold air through the gaps. Thus, the areas with a higher IT load are more susceptible to hot air recirculation, and the areas with a lower IT load are prone to cooling load leakage. Figure 2 shows the recirculation of hot air through gaps between the ceiling and containments in a data hall at an area with a high density IT load.

FIGURE 2. (a) The pressure distribution in the data hall at 6 ft above the floor. (b) The temperature distribution in the data hall at 6 ft above the floor. (c) The recirculation of hot air from the attic into the data hall through the gaps between the ceiling and containments at the high density IT load area. (d) The ceiling gaps left for ceiling deflection should be sealed in critical areas.
MATERIALS AND INSULATIONS

The materials used in data center buildings, such as racks, cabinets, and containments, hold different thermal capacities, so heat resistance must be considered during the design. For example, heat transferred through the ceiling, cabinets, and containments has an impact on thermal management. Choosing proper materials with reasonable R-values reduces the heat transfer between the hot and cold aisles. The heat transfer rate increases with higher temperature differences between the cold and hot sides. CFD helps model the outcome of using materials with different heat resistances at various temperatures in a data center. Figure 3 illustrates side wall diffusers located on the right side of a data hall. It is clear that the thermal boundary layers grow over the surface of the containment, and the ceiling influences the intake temperature of the servers located at a greater height.

FIGURE 3. (a) The temperature distribution on the wall of the racks and containments exposed to cold air. (b) The growing thermal boundary layers on the wall of the containment and ceiling increase the intake air temperature at the servers. Diffusers located on the wall can be seen on the right.
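The panel conduction described here can be bounded with the steady-state relation Q = A x deltaT / R. The sketch below uses assumed values throughout; the panel size, R-value, and aisle temperatures are illustrative, not from the project.

```python
# Steady-state conduction through one containment panel: Q = A * dT / R.
# Panel dimensions, R-value, and aisle temperatures are illustrative.
AREA_FT2 = 4 * 8                # one 4- x 8-ft containment panel (assumed)
R_IMPERIAL = 4.0                # panel R-value, ft^2*F*h/Btu (assumed)
T_HOT_F, T_COLD_F = 95.0, 65.0  # hot-aisle and cold-aisle temps (assumed)

q_btu_h = AREA_FT2 * (T_HOT_F - T_COLD_F) / R_IMPERIAL
print(f"Heat conducted through one panel: {q_btu_h:.0f} Btu/h "
      f"({q_btu_h * 0.29307:.0f} W)")
```

Per panel the leak is small (roughly 70 W in this example), but multiplied across hundreds of panels and run year-round it becomes part of the load a CFD model should capture.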

RISK ASSESSMENT AND CONTROL STRATEGY

It is imperative to consider and plan for the possible failure of components in the cooling system to prevent any IT damage or interruption.

K.I.S.S.¹ Fiber Cabling System
[Keep It Simple Stupid]

Enclosures
Patch Panels
Patch Cables

1U/72 LC $167³ (Free Shipping)
LC/1m $5.61³ (Free Shipping $99+)

Our customers asked us to keep fiber cabling simple, so we did! We are in the 21st century and everything should be faster, easier, and simpler, including the fiber cabling system. So we removed the complicated designs from our enclosures. We pre-assemble every part, according to your spec, right here in Southern California. There are no ten-page assembly instructions. All you need is a screwdriver to install four mounting screws (included) and you are done!

Our Bundle6™ fiber patch cords come bundled and labeled with six cables in a protective box. No field termination, no untangling, and no messy plastic bags. All you need to do is plug them in!

Don't let anyone tell you that fiber cabling is complicated and therefore should be expensive; it's not! Buy Cablesys Fiber Cabling System direct, and your life will be much easier, simpler, and less expensive, like 50%² less!

Cabling System Simplified
cablesys.com/fiber 800-555-7176 cs@cablesys.com
Copyright 2015, Cablesys. 1. The KISS Principle is a design principle noted by the US Navy. 2. Compared to big name brands. 3. Prices are subject to change.



These can occur during the failure of one or more computer room air handlers (CRAHs), during a power outage, or when an unexpected recirculation occurs. There is no conventional tool to simulate these failure scenarios, but they can be modeled using CFD, which can predict how long it takes temperatures to rise to a given point so that an applicable solution for the data center can be realized. Figure 4 shows a failure scenario in which three CRAHs failed at the same time. In this example, Southland Industries, a national MEP building systems firm, used CFD to calculate the right amount of cooling load by increasing the airflow of the adjacent CRAHs, as opposed to intensifying the airflow of all CRAHs. This compensated for the failed CRAHs in an efficient manner and ultimately conserved energy. The figure also shows that containments were removed at different locations in the data hall. Containment removal at some locations can cause hot air recirculation, and it is more crucial in locations where the removal causes cooling load losses.

FIGURE 4. (a) Temperature distribution in the data hall at 3 ft above the floor with three failed CRAHs and an adjusted flow rate at adjacent CRAHs. (b) The location of the removed containments. (c) The temperature contours show hot air recirculation from the attic into the data hall. (d) The temperature contours show cold air leakage from the data hall to the attic.

CFD can also be used to locate appropriate locations for control sensors, or to devise a smart control strategy that balances the supply airflow with the IT density in a data hall. This alleviates the high velocity, called the wind tunnel effect, that occurs as a result of air rushing from a lower IT density area to a higher density area. Figure 5(a) highlights the zonal control strategy that balances the airflow supply based on the non-uniform IT density in the hall. Figure 5(b) shows the temperature contours at 4 ft above the floor with the air supply adjusted in proportion to local IT loads. Figures 5(c) and 5(d) compare the velocity contours (ft/s) in the data hall with an equal air supply at each CRAH and with the air supply adjusted based on the local IT load. The comparison shows that the high velocity region in the middle of the corridor is alleviated in the adjusted air supply case.

FIGURE 5. (a) Setting up a zonal control strategy in the data hall to balance the airflow. (b) The temperature distribution in the data hall at 4 ft above the floor with supply air adjusted based on the local IT load. (c) Velocity contours (ft/s) at 4 ft above the floor with an equal air supply at each zone. (d) Velocity contours (ft/s) at 4 ft above the floor with the air supply adjusted based on the corresponding IT loads at each zone.
PARTICLES ENTRAINMENTS

High humidity in a data center can cause condensation, corrosion, and electrical shorts, while low humidity can cause electrostatic issues that harm the system. Moreover, the entrainment of generator engine emissions or other particles into the outdoor air (OA) supply can damage the computers. For these reasons, it is important to design and control the data center for the right humidity ratio. CFD can aid design engineers in the investigation of potential humidity issues inside the data center. It can also expose any particle entrainment and high humidity air in the data center, ensuring that the air quality meets the design criteria. Cooling towers, emergency generators, air exhaust, and suspended particles (e.g., sand grains) are all sources of high humidity air and particles. Figure 6 shows the outside view of a mission critical facility. In this example, Southland Industries employed CFD to calculate the cooling tower water particles, generator emissions, and humidity concentration at the OA intakes under different wind speeds and directions. This verified the appropriate locations of the emergency generators, cooling towers, exhaust air, and OA intakes in the design.

FIGURE 6. (a) The location of the cooling towers, emergency generators, exhaust, and OA air intakes. (b) The high humidity air from the cooling tower at a low wind speed. (c) The high humidity air from the cooling tower based on the highest wind speed and worst direction in the area. (d) The water particle tracks from the cooling tower based on the highest wind speed and worst direction in the area. (e) The gas emission and particle tracks from the emergency generator. (f) The gas emission and particle tracks from the emergency generator in the far field.


Upgradeable Rack
PDU Intelligence:

Designed for tomorrow's


data center, today.

Independently tested to 60°C operating temperatures, Geist Upgradeable PDU consists of a robust chassis featuring hot-swappable intelligence, currently offering remote power and environmental monitoring options, and upgradeable as technology evolves.
Fit Geist Upgradeable PDUs once and
the future will come to you.

geistglobal.com/upgradeablePDU



COMPONENT EVALUATION

Many challenges can be encountered during the design of mission critical facilities. Manufacturers typically test and validate most components under specific and controlled environmental conditions. CFD can be used to model the performance of equipment, including air-handling units (AHUs) and humidifiers, or of a new system under different design conditions. As a result, any possible problems can be predicted and planned for in advance, which brings more confidence to the design and, more importantly, efficiency and effectiveness. Figure 7 illustrates a pressurized thermal energy storage (TES) system. Southland Industries' implementation of CFD optimized the diffusers in the tank to increase performance by 24%. As part of the commissioning effort, the installed system was tested to the same conditions originally simulated in the CFD model. The CFD results were within a 2% margin of error and saved the customer time and money during the building phase.

FIGURE 7. (a-f) Temperature contours in the vertical cross section of the tank at different times during the discharge process.

CONCLUSION


Many factors, ranging from IT load, diffuser size, humidity, and rack size to failure scenarios, ceiling height, and hot spots, have an impact on the performance of mission critical facilities. In order to save energy and cut down on costs, these must be considered during the design or renovation process. The cooling system design of these facilities becomes even more challenging when the goal is an optimized design, as engineers push the limit to save energy and costs. CFD is a reliable solution that can produce accurate results. Implementing the right model in collaboration with a partner experienced in both the HVAC industry and CFD software can shorten the design procedure and optimize the design effectively. The virtual design used during this process allows owners, engineers, and architects to visualize the outcome, predict critical scenarios, and propose practical solutions prior to installation in a manner that is more accurate than conventional approaches and less expensive.

REPRINTS OF THIS ARTICLE are available by contacting Jill DeVries at devriesj@bnpmedia.com or at 248-244-1726.

Read this article online at www.missioncriticalmagazine.com/drrezaghias


The Right Time For DCIM
Real-time or near-time? Find out what is right for your facility.

A key component of today's data center infrastructure management (DCIM) systems is gathering and analyzing live data associated with the data center. This can represent thousands of points of information, such as temperature, power, capacity, or status, from any number of devices, meters, or sensors throughout the data center. The collected DCIM information can easily venture into the Big Data realm, with not only the collection of information but also the storage of millions of samples of historical values.
As an industry term, DCIM has become convoluted over the years as multiple vendors use the same term to define significantly different feature sets. While DCIM is taking a more defined shape, the term real-time in regards to data collection is in danger of falling into that same confusing realm for an end user.
By Matt Lane

As a co-creator of Geist's DCIM solutions, Matt Lane has over 14 years of experience in data center monitoring and product development. He brings a wide range of experience as an entrepreneur, business owner, and manager. He is currently the president of Geist's DCIM division, which provides customized solutions for data center monitoring.

Our team recently heard an end user say that their DCIM provider gave them real-time information as one sample each day. There were hundreds of thousands of data points, and the software could only accommodate a single poll of each data point every day. Naturally, that end user was disappointed and discouraged, as their expectations of real-time data were far from what the vendor actually produced.
Another term beginning to be heard across the industry is near-time, which is a more accurate description of what most DCIM systems provide. A third term with a separate meaning is extended interval. At Geist we have worked hard to define these three terms in the following way:
• Real-time: a continuous sampling of data sets with a refresh cycle of seconds.
• Near-time: a sampling of data sets separated by more than a minute but less than one hour.
• Extended interval: any sampling of data that is delivered less frequently than once per hour.
These three rates of refreshed information have their own distinct use cases, along with pros and cons for each. There isn't a one-size-fits-all approach to collecting live data. The user's need is the key driver in determining what data needs to be collected and at what rate.
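To make these rates concrete, here is a minimal, illustrative Python sketch (ours, not from the article) that classifies a polling interval into the three buckets defined above and estimates how many samples per day a DCIM system would store at that rate. The 10,000-point count is an assumed example.

def classify(interval_seconds: float) -> str:
    """Bucket a polling interval using the definitions above."""
    if interval_seconds < 60:
        return "real-time"        # refresh cycle of seconds
    if interval_seconds < 3600:
        return "near-time"        # more than a minute, less than an hour
    return "extended interval"    # less frequent than once per hour

def samples_per_day(interval_seconds: float, points: int) -> int:
    """Total stored samples per day for a given rate and point count."""
    return int(86_400 / interval_seconds) * points

points = 10_000  # assumed number of monitored data points
for interval in (5, 900, 86_400):  # 5 sec, 15 min, once per day
    print(f"{classify(interval):>17}: "
          f"{samples_per_day(interval, points):,} samples/day")

At a 5-second refresh, those 10,000 points produce over 170 million samples per day, which is why real-time collection pushes DCIM into the Big Data realm.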

THE BENEFITS OF REAL-TIME DATA

It might be best to illustrate the benefits of real-time data with a real-life case: a colocation provider that, prior to the installation of
DCIM, had been manually logging their tenants' power usage at extended intervals. Approximately four times per day they would take physical readings, record them in a spreadsheet, and then evaluate the spreadsheet monthly to ensure that the tenants were all staying within their power SLAs.
After deployment of a DCIM system, they captured data on a real-time basis and then stored that data for historical review. At the end of the first month, the reports derived from their real-time system were quite astonishing. The original extended interval logging had gaps large enough that there were significant differences between what was reported the prior month and what was being reported through the new system. In the end, the colocation provider realized they had several customers that were over-utilizing their prescribed power capacities. As a result, they were able to renegotiate their service agreements, and the cost of the DCIM implementation was recouped in a matter of months. Who says DCIM doesn't have a tangible ROI?
Beyond this short illustration, real-time data collection has many benefits:
• Warnings and alarms. With data refreshed within seconds, users can be alerted to threatening situations and react quickly. Real-time information may help them see issues before they become problematic, allowing the operator to move from a reactive into a more predictive management state.
• Highest accuracy of data. With frequent polling comes the opportunity to store additional detailed historical information for use in data analysis. A high sample rate ensures that quick spikes and sags in readings are captured.
• Reporting and trend analysis. Real-time information provides an increased level of detail when it comes to reporting and identifying trends. The data center environment can change quickly, and a higher data refresh rate ensures that the user sees the entire picture.
• Validation of capacities. A database of devices and their anticipated power draw is included in most DCIM systems today. Real-time data allows the user to utilize the most precise data to validate their nameplate or de-rated assumptions to ensure maximum usage of their full capacities.
• Operational awareness. Data center operators can frequently be seen entering the critical environment to take readings, assess an audible alarm, or just generally evaluate the status of the site. Having real-time information accessible through their DCIM system provides that information in a more convenient and holistic way, giving greater understanding into many aspects of their operations.
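As a minimal illustration of the warnings-and-alarms benefit above, the following hypothetical Python sketch polls a reading on a seconds-scale cycle and raises an alert the moment a threshold is crossed. The read_sensor stand-in, the 27°C limit, and the 5-second interval are all assumptions made for the example.

import random
import time

ALARM_THRESHOLD_C = 27.0   # assumed rack-inlet temperature limit
POLL_SECONDS = 5           # a real-time refresh cycle of seconds

def read_sensor() -> float:
    """Stand-in for a real device poll (e.g., a temperature sensor)."""
    return random.uniform(22.0, 29.0)

def monitor(cycles: int = 10) -> None:
    """Poll on a real-time cycle and alert as soon as a limit is crossed."""
    for _ in range(cycles):
        reading = read_sensor()
        if reading > ALARM_THRESHOLD_C:
            print(f"ALARM: {reading:.1f} C exceeds {ALARM_THRESHOLD_C} C")
        time.sleep(POLL_SECONDS)

monitor()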

THE DRAWBACKS TO REAL-TIME DATA

• Cost of implementation. It takes a significant amount of processing to collect and manage all of that real-time information, translating into higher implementation and system costs.
• Data overload. It is important that a real-time data collection tool has intelligent and simple ways to make sense of all of the collected information. Good user interfaces, graphical representations, and reporting engines are a must to avoid information overload.
• Extended network and processing resources. Big Data brings with it the challenge of passing vast amounts of information across LANs and WANs, as well as processing and storing all the data collected. An efficient tool is needed to ensure that the performance of the application remains high without degrading other systems in the process.

WHEN NEAR-TIME DATA IS HELPFUL

Near-time data can be somewhat less taxing for a system to collect and manage and can provide a number of benefits to DCIM users.
• Validation of capacities. While it may not have the same number of samples as real-time data, data collected at reasonable near-time intervals can provide valuable insight into actual readings and associated trends that can be used to validate assumptions made in modeling capacities.
• Replacement of sneaker reports. We see many organizations that still use technicians to walk the data center floor and take manual readings at defined intervals. Because those types of reports are completed on a somewhat infrequent basis, near-time data can provide at least a one-for-one replacement and free up an employee's time for more productive tasks.
• General planning and architecture. Near-time data can be adequate when high-frequency operational awareness is not required but general planning and visibility is sought. A lot of data can still be gleaned from a 15-minute poll rate that will provide accurate enough information to aid planning and data center growth decisions.

THE DIFFERENCE BETWEEN REAL-TIME AND NEAR-TIME

Real-time data collection and near-time data collection share many of the same benefits, but certain operational elements are not available at a near-time rate. Some of those include:
• Delayed warnings and alarms
• Failure to capture short bursts or periodic changes in between polling cycles
• Not enough detail to fully examine an event
The main difference between the two polling rates is the effect on operational awareness. As an example, if the poll cycle is every 15 minutes and a 10-minute power outage occurs, the

ability to collect information about how the load transferred and returned to normal, how the temperatures were affected, and generally to review the entire event is simply not possible.
When monitoring power specifically, a near-time polling cycle can easily miss spikes and sags, or simple deviations in workloads that can change rapidly.
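A quick way to see this effect is to simulate it. The illustrative Python sketch below (all numbers are assumptions chosen to mirror the example in the text) drops utility power for 10 minutes and samples the state at a 15-minute near-time cycle and a 5-second real-time cycle; because the outage falls between the slow polls, the near-time collector never sees it.

# Illustrative simulation: a 10-minute outage sampled at two polling rates.
OUTAGE_START, OUTAGE_END = 1000, 1600   # outage from t=1000 s to t=1600 s

def power_ok(t: int) -> bool:
    """True when utility power is present at time t (seconds)."""
    return not (OUTAGE_START <= t < OUTAGE_END)

def samples(interval: int, duration: int = 3600):
    """Sample the power state every `interval` seconds over `duration`."""
    return [power_ok(t) for t in range(0, duration, interval)]

for label, interval in (("real-time (5 s)", 5), ("near-time (15 min)", 900)):
    detected = not all(samples(interval))
    print(f"{label}: outage detected = {detected}")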
If operational awareness and greater in-depth analysis of events is a critical factor in the success of the DCIM system, near-time data collection is likely not the answer. Real-time polling provides the granularity of information needed by the technicians responsible for continuous equipment operation.

WHEN IS EXTENDED INTERVAL RIGHT FOR ME?

Extended interval polling is a very sporadic collection of information. This kind of data is more useful at a macro level. For instance, a daily sample can give good insight into rounded readings like maximum megawatts utilized. During the course of a normal day, there is too much variation in power readings to put much stock in a single time sample.
A good use case for extended interval polling is global capacity planning. An executive-level user could be tasked with determining when to build a new data center or when to consider


colocating. A small number of infrequent samples could provide a close enough picture of the power footprint across an organization for the executive to start planning conversations.
Technicians, 24/7 staff, and even managers will be left wanting more information as they attend to their daily duties at an extended interval rate. So, in summary, extended interval is really only effective for high-level planning.

CONCLUSION: YOUR TIME IS THE RIGHT TIME

The point is that there is no single live data-polling rate that is best for everyone. However, there is a right polling rate for each job title group within the data center.
Technicians, operators, NOC staff, and those responsible for the daily operations of a data center will likely find a real-time data collection system most beneficial. It provides the highest degree of operational awareness and the ability to complete post-mortem analysis on past events. The other polling rates cannot provide nearly the level of information this group requires.
Near-time polling rates are great for those responsible for detailed planning and reporting. Generally, this responsibility resides with data center, IT, or facility managers who have a continuing need to analyze capacities when deploying new equipment and planning for future equipment. These managers may not need the same level of operational awareness, such as the instantaneous alarming or power quality capture that comes with real-time levels. However, near-time gives them a very nice window into how power flows throughout the day and the effect that has on their working environment.
Data center operators won't have a lot of use for extended interval polling. There simply isn't enough granularity to benefit the reactive decisions and actions they must take. Extended interval is a reasonable fit for the executive-level group, who are more interested in generalities or in data across many sites. Having infrequent measurements still gives them enough data to make high-level decisions that can then be passed down to the managers for greater evaluation.
In the end, it is most important to establish the business needs first. Who is using the system? What are they using it for? What are the goals of the system? What data needs to be collected to accomplish those goals? If the right scope of work is defined at the outset of the project, obtaining a system that has the appropriate level of data polling will be simplified. There is a right choice for data acquisition frequency, and what it is depends on who is using DCIM.

REPRINTS OF THIS ARTICLE are available by contacting Jill DeVries at devriesj@bnpmedia.com or at 248-244-1726.

Read this article online at www.missioncriticalmagazine.com/mattlane



Motors For Mission Critical Facilities
New design for permanent magnet motors uniquely delivers ultra-high efficiency at low speeds.

When it comes to fans and motors, more of a good thing is not always a good thing. A recent U.S. Environmental Protection Agency (EPA) study stated that almost 60% of the fans within buildings today are oversized. The study went on to say that almost 10% of the fans were oversized by at least 60%. Although the magnitude of the issue may be surprising, the problem is well known to anyone involved with the design and selection of fan systems. The conservative approach often taken when designing and purchasing a fan and motor results in a product that exceeds the system requirements and consumes more energy than necessary.

By Andrew T. Holden, P.E.

Andrew T. Holden, P.E., is a sales executive with NovaTorque, Inc., a California-based company that produces ferrite-based permanent magnet motors. Andy is an industrial engineer from Georgia Tech with an MBA from Georgia State and has spent almost 10 years in the HVAC industry as a manufacturer's representative. He is currently focused on representing NovaTorque's motors to engineers, architects, owners, OEMs, contractors, reps, and end users. In addition to his time in the HVAC industry, he has spent over five years in the electric power generation sector as a consultant and as a commercial manager with international responsibilities.


Oversizing fan/motor systems can end up creating a host of other issues, including higher installed and operating costs, increased maintenance, and possibly a higher level of vibration and noise. It is very common for the application of safety margins to be compounded through the specification and purchase process, with the accepted remedy being the addition of a VFD to ramp down the speed.
The issue becomes even more complicated in HVACR applications with requirements for fan speeds well below that of the standard 4-pole, 1,800 RPM AC induction motor.
The difficulty derives from the fact that properly sizing a motor to lower speed design requirements, for example selecting an 8-pole 900 RPM or a 6-pole 1,200 RPM AC induction motor, must be weighed against the additional cost and inferior energy efficiency associated with these machines. Hence the most common solution has been to use a lower cost, more efficient 1,800 RPM induction motor, gear it down mechanically with belts and pulleys, and then control the final desired speed range with a VFD. Of course, belts and pulleys introduce their own inefficiencies, costs, maintenance requirements, and design complexity.
This issue is increasingly faced in the design and construction of custom air handling equipment used in mission critical data center applications. Here energy efficiency is of utmost importance due to the 24/365 duty cycle and the end user focus on lifetime operation and maintenance costs. Also, these data center


applications are tending toward high volume, low static pressure requirements, best addressed by larger diameter fans run at slower speeds.
Recent advancements in permanent magnet motor technology are now offering engineers an alternative to the inherent inefficiency and high cost of 900 RPM and 1,200 RPM AC induction motors.

PERMANENT MAGNET MOTOR ADVANTAGES

FIGURE 1. Efficiency comparison graph: 5 hp 900 RPM motor.

Permanent magnet motors are more efficient than induction motors primarily because the magnetic field in the rotor is produced by permanent magnets rather than induced electrically. And, unlike induction motors, permanent magnet motors maintain their high efficiency over a broad operating range, which is ideal for variable speed applications.
Permanent magnet motors also enjoy a higher power density than AC induction motors: they are able to produce more torque for their physical size than a comparable AC induction motor.
The permanent magnet motor's ability to continuously deliver high torque at low speed may also eliminate the need for gearing or other mechanical transmission devices in many applications.
Because motor losses (energy inefficiency) translate to motor heating, higher efficiency PM motors operate at lower temperatures than comparable AC induction motors, particularly at lower speeds, where induction motor efficiency drops off much faster than is the case with PM motors. Lower operating temperatures translate into longer motor life: over time, exposure to higher levels of heat degrades the insulation, ultimately shortening the life expectancy of the motor. A common rule of thumb is that a 10°C increase in motor temperature cuts insulation life in half. Bearing grease life, and hence bearing life, is affected as well.

RECENT PERMANENT MAGNET MOTOR ADVANCEMENTS

FIGURE 2. Efficiency comparison graph: 5 hp 1,200 RPM motor.


Permanent magnet motors have been available for decades and have been widely acknowledged to produce higher efficiencies at a wider range of speeds compared to the more ubiquitous AC induction motors. The biggest barrier to adoption of permanent magnet motor technology in highly competitive HVACR applications has been that permanent magnet motors have been cost prohibitive, often two to three times the price


of an induction motor, mainly due to the expense of the rare earth magnets needed to achieve the flux necessary to produce sufficient torque.
A new, flux-focusing PM motor design utilizes a unique conical geometry to solve the cost issue by allowing the substitution of low cost, readily available ferrite magnets for the rare earth magnets previously required. This new design provides rare-earth-like permanent magnet motor performance and efficiency at a price that is more comparable to induction motors.
Besides focusing flux, this new motor geometry has other distinct advantages over both AC induction motors and conventional permanent magnet motors available today. The compact bobbin windings on the laminated stator poles eliminate end turns, reducing copper usage and the associated I²R motor losses. The straight axial flux path also allows for the use of grain-oriented steel, which reduces iron losses.
Hence, while significantly less costly, this new permanent magnet design not only compares well with induction motors, but is also more efficient than many of the conventional, higher cost, permanent magnet designs on the market today.

APPLICATION TO 900 RPM AND 1,200 RPM MOTORS

While PM motors in general, and the conical design in particular, produce significant efficiency gains over AC induction motors at the 1,800 RPM rating point, that advantage grows dramatically at 900 RPM and 1,200 RPM.
With this new design, modifications to the stator, without changing the pole count, readily convert the motor's rated speed from 1,800 RPM to the equivalent of a 900 RPM (8-pole) or 1,200 RPM (6-pole) AC induction motor. (2,400 RPM and 3,600 RPM models are also available.)
The resulting lower speed permanent magnet motor possesses the same high efficiency characteristics as the 1,800 RPM permanent magnet motor, but operates at a speed range closer to the ideal fan design point. For example, a 5 hp, 900 RPM motor of this type is rated at 93% efficiency vs. a typical 82% to 83% for an induction motor of the same rated speed and power. And that efficiency gap grows as the speed is turned down from 900 RPM toward the 400 to 600 RPM often required by the application.
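To put those efficiency figures in perspective, here is a small illustrative Python calculation comparing annual energy use for a 5 hp motor at 93% vs. 82.5% efficiency. The continuous 24/365 duty fits the mission critical context described above, but the $0.10/kWh electricity rate is our own assumption, not an article figure.

# Illustrative energy comparison for a 5 hp motor at two efficiencies.
# Duty cycle and electricity rate are assumptions, not article figures.
HP_TO_KW = 0.746
shaft_kw = 5 * HP_TO_KW            # 5 hp of mechanical output
hours = 24 * 365                   # continuous mission critical duty
rate_usd_per_kwh = 0.10            # assumed electricity price

def annual_kwh(efficiency: float) -> float:
    """Electrical input energy per year for a given motor efficiency."""
    return shaft_kw / efficiency * hours

induction = annual_kwh(0.825)      # typical 82% to 83% induction motor
pm_motor = annual_kwh(0.93)        # 93% permanent magnet motor
savings = induction - pm_motor
print(f"Induction: {induction:,.0f} kWh/yr")
print(f"PM motor:  {pm_motor:,.0f} kWh/yr")
print(f"Savings:   {savings:,.0f} kWh/yr (~${savings * rate_usd_per_kwh:,.0f}/yr)")

Under these assumptions the higher efficiency motor saves roughly 4,500 kWh per fan per year, and the gap widens further at the reduced speeds discussed above.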
This gives HVAC equipment engineers the opportunity to optimally size and direct drive their fans at low speeds without sacrificing (actually enhancing) motor efficiency while, at the same time, eliminating the costs, inefficiencies, and maintenance associated with belt drives, achieving cost effective efficiency levels not available until now.

SUMMARY

Variable speed applications now make up nearly one-third of new HVAC equipment designs. While turning down the speed of a fan/motor reduces overall system energy consumption, it increases the inefficiency of the induction motor, offsetting some of the gains made.
Substitution of permanent magnet (PM) motors for induction motors can eliminate that offset, but until now PM motors have been prohibitively expensive. New, conical stator/rotor geometry motors eliminate the cost obstacle.
The advantages in terms of both cost and energy savings are even more dramatic when applied to 900 RPM and 1,200 RPM models, thus opening the door for their economical application to low speed, direct drive fan applications.

REPRINTS OF THIS ARTICLE are available by contacting Jill DeVries at devriesj@bnpmedia.com or at 248-244-1726.

Read this article online at www.missioncriticalmagazine.com/andretwholden


Don't Let TAPs Handcuff Your Network
Consider an optical tap to access critical network data.

By Jennifer Cline, RCDD, and Brian Rhoney

Jennifer Cline is the enterprise networks data center global market development manager for Corning Optical Communications in Hickory, N.C. Jennifer started 15 years ago in technology as a graduate from North Carolina State in Mechanical Engineering. She has since held positions in systems engineering, enterprise marketing, and enterprise sales, including being a member of the Global Accounts Team. For more information, please email Jennifer.Cline@corning.com or visit www.corning.com/opcomm.

With over 10 years of experience at Corning Optical Communications, Brian Rhoney has held positions in product engineering, systems engineering, and product line management. He is currently the data center business development program manager for Enterprise Networks. In 2005, Brian received recognition as the Dr. Peter Bark Inventor of the Year for Corning Optical Communications with Pentagon Cable. He also received his professional engineer's license in December 2005. Brian graduated from North Carolina State University with a Bachelor of Science in Mechanical Engineering and a Master of Science in Mechanical Engineering. He also received a Master of Business Administration from Lenoir-Rhyne University. For more information, please email Brian.Rhoney@corning.com or visit www.corning.com/opcomm.


Optical taps allow network and storage engineers to gather valuable data analytics. As the need to monitor data becomes more prevalent, performance considerations for incorporating taps in the data center cabling infrastructure should be taken into account. Both insertion loss and system parameters such as bit error rate should be considered when evaluating tap performance. While there is an inherent power penalty associated with using a tap, the structured cabling can be designed to support bandwidth needs through the use of system models generated by standards committees. These models reveal the trade-off between power penalties and supportable distance at a given data rate. Using these models, along with consideration for the design and performance of system components, system designers are able to successfully deploy optical taps for network monitoring.

WHAT IS AN OPTICAL TAP?


Optical taps are devices used to passively extract network data for
analysis. Typically, the data is used to monitor for security threats,
performance issues, and optimization of the network. An optical
tap includes an optical splitter, which splits off a percentage
of the input power and sends it to a monitoring device such as a
probe or analyzer. Taps can be used in both the Ethernet network
and storage area network (SAN); most commonly, split ratios of
50/50 are used in Ethernet systems and 70/30 split ratios are used
in Fibre Channel SAN systems.
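Those split ratios map directly to optical loss. As a quick illustrative check (ideal splitters only; real devices add excess loss on top, which is why the figures quoted later in this article run higher), the fraction of power on each leg converts to dB as -10·log10(fraction):

import math

def split_loss_db(fraction: float) -> float:
    """Ideal insertion loss of a splitter leg passing `fraction` of the power."""
    return -10 * math.log10(fraction)

for ratio, legs in (("50/50", (0.5, 0.5)), ("70/30", (0.7, 0.3))):
    losses = ", ".join(f"{split_loss_db(leg):.2f} dB" for leg in legs)
    print(f"{ratio} splitter legs: {losses}")
# 50/50 -> 3.01 dB on each leg; 70/30 -> 1.55 dB and 5.23 dB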



TABLE 1. Channel insertion loss comparison.

                                  NON-INTEGRATED TAP              INTEGRATED TAP
COMPONENT TYPE                    Loss (dB)   Qty per Channel     Loss (dB)   Qty per Channel
LC Matings                        0.15        4                   0.15        2
MTP Matings                       0.35        2                   0.35        2
50/50 MM Splitter                 4.5         1                   3.8         1
Total Channel Insertion Loss      5.8 dB                          4.8 dB

FIGURE 1. Example of a two patch panel link with insertion of a TAP module.

HOW DO TECHNOLOGIES SUCH AS OPTICAL TAPS IMPACT CHANNEL DISTANCE?

The use of taps introduces additional loss into a channel due to the nature of the technology, splitting either 30% or 50% of the power from the transmission. As inserting optical splitters (taps) into the cabling infrastructure for network monitoring has become common practice, the question is how the power penalty impacts cabling channel performance and distance. The loss can range from 1.8 dB to 4.5 dB depending upon the split ratio and the performance or type of splitter used. Figure 1 depicts a two patch panel cabling link connecting two devices. On one end of the channel, a standalone optical tap has been introduced in the link to


allow for monitoring of the network traffic for security and performance. If a 50/50 split ratio multimode splitter were placed in this channel, there would be an additional loss of 3.8 to 4.5 dB due to the optical split (depending on vendor specification), as well as the loss due to the LC matings at the input and output of the tap.
For a 10 GbE link, IEEE specifies a maximum connector loss of 1.5 dB (maximum channel loss of 2.6 dB) to support a maximum distance of 300 meters on OM3 fiber. With a two module link (assuming each module has a 0.5 dB specification), the overall link insertion loss would be 1.0 dB. However, when incorporating a tap as shown in Figure 1, the connector insertion loss increases by a total of 4.8 dB (4.5 dB for the splitter plus 0.3 dB for the additional two LC connector pairs); adding that to the connector loss associated with the MTP-LC modules gives a total insertion loss of 5.8 dB. This does not mean that the channel will not support 10 GbE; it simply means the distance over which 10 GbE is supported is reduced from the original 300 meter maximum. Utilizing the IEEE system model to exchange distance for available dB loss, the above channel with an LC standalone tap module included supports a distance of 59 meters over OM3 fiber and 73 meters over OM4 fiber. Hence the channel distance capability is reduced by nearly 80% as compared to the original non-tapped link.
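The budget arithmetic above is easy to script. This illustrative Python sketch (component losses follow the article and Table 1; the function and variable names are ours) totals the channel insertion loss for the two tap designs discussed in this article:

# Illustrative channel loss budgets for the two tap designs.
# Component losses follow Table 1; each channel has two MTP matings,
# some number of LC matings, and one splitter.

LC_MATING = 0.15   # dB per LC connector pair
MTP_MATING = 0.35  # dB per MTP connector pair

def channel_loss(lc_count: int, mtp_count: int, splitter_db: float) -> float:
    """Sum connector and splitter losses for one channel."""
    return lc_count * LC_MATING + mtp_count * MTP_MATING + splitter_db

non_integrated = channel_loss(lc_count=4, mtp_count=2, splitter_db=4.5)
integrated = channel_loss(lc_count=2, mtp_count=2, splitter_db=3.8)
print(f"Non-integrated tap channel: {non_integrated:.1f} dB")  # 5.8 dB
print(f"Integrated MTP tap channel: {integrated:.1f} dB")      # 4.8 dB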

HOW CAN I ACHIEVE LONGER DISTANCES WHEN TAPPING?

To maximize distance capability, the connectivity design of the tap itself can factor into the loss introduced into a tapped channel. In a traditional tapped network today, a standalone LC-based tap is typically utilized in the design, as previously depicted in Figure 1. Alternatively, implementing an MTP integrated tap module in the cabling infrastructure reduces the number of connectivity components in the channel, thereby lowering the associated channel insertion loss. As shown in Figure 2, an MTP integrated tap module has MTP port connectivity for the input ports, allowing its use in place of the traditional MTP-to-LC module and standalone tap.

FIGURE 2. Example of a two patch panel link with insertion of an MTP integrated TAP module.

Using a high-performance, integrated MTP-based tap reduces the loss budget for the channel, as shown in the comparison in Table 1, where the number of LC matings in the channel is reduced by two. Low loss performance connectivity is



assumed in both scenarios; in the traditional non-integrated tap design, a splitter loss of 4.5 dB is used, based on typical solutions available in the market today. With the use of high performance splitter technology in the tap module, the loss impact of the tap is further reduced: in the case of a 50/50 MM splitter, the loss across the splitter drops to 3.8 dB. With a 1 dB reduction in total insertion loss, the distance capability increases from 73 meters to 240 meters over OM4 fiber at 10 GbE. This results in nearly three times the distance capability of the traditional non-integrated tap design.
Utilizing an integrated solution with high-performance components reduces the impact of the additional loss introduced by a tap into a system, yielding a longer distance or reach.

SYSTEM PERFORMANCE VALIDATION

Up to this point we have only considered the insertion loss impact of introducing a tap module into an optical link.


However, there are other penalties that can affect signal integrity, such as jitter, differential modal delay, etc. These transmission performance penalties cannot be captured by a simple insertion loss, or power-through, measurement; instead, measurement of bit error rate (BER) is required.
Not all multimode splitters utilize the same technology, and this can have a varying impact on the BER. TAPs available in the market typically use fused biconical taper (FBT) technology for both single-mode and multimode applications. However, alternate technologies such as thin-film splitters are available for high performance multimode applications. Characterization testing of these two technologies helps to define when transmission penalties are introduced in the system due to differential mode delay, depending upon the technology used in the splitter.
IEEE specifies a minimum receive power of -9.9 dBm in order to maintain operation at acceptable BER levels of 10⁻¹². Characterization testing of systems with each of the above-mentioned splitter technologies reveals performance differences between the splitter types. To fully understand the implications of FBT vs. thin film technology, two systems should be evaluated, varying the placement of the splitter relative to fiber length. Both of the system setups to be tested are depicted in Figure 3:
• TAP module with 300 meters of OM3 fiber on the transmit (Tx) side, before the split
• TAP module with 300 meters of OM3 fiber on the receive (Rx) side, after the split

FIGURE 3. Characterization testing scenarios.
Prior to testing the above scenarios, a reference BER waterfall curve must be generated. To create one, the BER is measured over a short length of fiber with a variable optical attenuator (VOA) across a range of receive power levels. When BER vs. receive power is plotted, the BER waterfall curve is generated. After creating a reference waterfall curve at a very short length, a longer length of fiber (300 meters) is tested, and BER is again measured over a range of receive power levels until the BER reaches 10⁻¹². Power penalties such as differential mode delay will cause this waterfall curve to shift to the right, with a BER of 10⁻¹² occurring at a slightly higher receive power threshold.
Once both reference waterfall curves are generated, the above test setups are measured, with 300 meters of fiber placed on each side of the splitter and on each splitter output leg (70% and 30%). The results of these measurements, compared to the reference curves, indicate any effects of the splitter performance on BER. The desired output is for the BER curves of the splitter outputs to coincide with the 300 m reference curve, indicating no additional penalties (other than insertion loss) are incurred when introducing a splitter in the link.

THIN FILM (HIGH PERFORMANCE) MM CHARACTERIZATION

When generating the short fiber length reference curve for test equipment validation, the acceptable BER (10⁻¹²) threshold occurs at approximately -14 dBm, showing a 4 dB margin compared to the -9.9 dBm specification stated in IEEE 802.3 for 10 Gigabit Ethernet. As expected, when the reference fiber length is increased to 300 meters, the BER curve shifts to the right due to length dependent effects such as differential mode delay. The acceptable BER (10⁻¹²) threshold occurs at approximately -12.8 dBm, well within the minimum requirement of -9.9 dBm stated in IEEE 802.3 for 10 Gigabit Ethernet.
System setups 1 and 2 were characterized with measurements taken on both the 30% and 70% output legs, resulting in four total scenarios measured against the 300 meter reference waterfall curve:
• TAP module with 300 meters of OM3 fiber before the splitter, measured on the 70% output leg
• TAP module with 300 meters of OM3 fiber before the splitter, measured on the 30% output leg
• TAP module with 300 meters of OM3 fiber after the splitter, measured on the 70% output leg
• TAP module with 300 meters of OM3 fiber after the splitter, measured on the 30% output leg
The waterfall BER curves of these four scenarios, shown in Figure 4, overlay directly on the 300 meter reference system curve with no tap (or splitter). This indicates that the splitter does not introduce any BER penalties in the systems. As the graph shows, both output legs from the splitter (70% and 30%) have the same performance with respect to BER, indicating there are no modal effects induced through the splitter. The performance is also shown to be the same whether the 300 meter length of the system is inserted before or after the split, eliminating possible concerns over BER effects due to placement of the splitter in the system.

FIGURE 4. Waterfall BER curve with Corning thin-film splitter technology.

FBT MM CHARACTERIZATION

For comparison, an FBT technology used in many of the MM taps available in the market was tested in each scenario as well. As shown in Figure 5, the BER waterfall curves vary for each of the systems, depending on where the TAP is placed relative to the 300 meter system length (before or after the splitter) as well as on the output leg (70% vs. 30%) of the splitter. This disparity in BER between the output legs of the splitter indicates that there are variations in the power distribution across the splitter, resulting in additional penalties for each splitter output. As shown in Figure 5, on the 30% output leg the acceptable BER rate of 10⁻¹² occurs at a receive power level of -11 dBm, which provides only 1 dB of margin from the IEEE specification.

FIGURE 5. Waterfall BER curve with FBT splitter technology.
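For a back-of-the-envelope view of those results, this illustrative Python sketch (the structure and names are ours) computes each technology's headroom against the IEEE 802.3 floor using the measured thresholds quoted above:

# Margin to the IEEE 802.3 10 GbE receive-power floor (-9.9 dBm) at BER 10^-12,
# using the measured thresholds quoted in the text.
IEEE_MIN_DBM = -9.9

measured_thresholds_dbm = {
    "thin film, 300 m reference": -12.8,
    "FBT, 30% output leg": -11.0,
}

for setup, threshold in measured_thresholds_dbm.items():
    margin = IEEE_MIN_DBM - threshold   # dB of headroom before violating spec
    print(f"{setup}: {margin:.1f} dB margin")
# thin film: about 2.9 dB of headroom; FBT 30% leg: about 1 dB, as noted above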

The characterization testing completed verifies that a TAP module using high performance thin film technology does not introduce BER penalties. Although these additional penalties are not measurable in the field by a traditional light source and power meter attenuation measurement, they affect the transmission signal. Thus, it is important to consider the type of splitter technology being deployed in the network link in order to ensure there will be no negative impact on link performance.

SUMMARY

There are many factors to take into account when designing data center cabling infrastructure. Optical taps are often used to obtain a copy of network traffic for monitoring purposes. Due to the insertion loss introduced by the taps, the overall link distance is reduced, sometimes by nearly 80% of the original non-tapped length. This impact can be minimized by selecting high-performance tap modules that integrate the taps into the structured cabling. It is also important to consider the construction of the splitter, to ensure that thin film technology is utilized in multimode links, guaranteeing that no additional BER penalties are introduced to the link. With these considerations in mind, system designers are able to successfully deploy optical taps when monitoring network links, with no negative link effects other than shortened distances.

REPRINTS OF THIS ARTICLE are available by contacting Jill DeVries at devriesj@bnpmedia.com or at 248-244-1726.

Read this article online at
www.missioncriticalmagazine.com/jennifercline
www.missioncriticalmagazine.com/brianrhoney


Changing The Face Of Facility Management
The Internet of Things (IoT) is coming to data center infrastructures near you.

By Bhavesh Patel

Bhavesh Patel is vice president of global marketing for Emerson Network Power's ASCO Power Technologies business.

Converged infrastructure. Hyperconvergence. Centralized resource management, consolidated systems, increased resource utilization rates, and reduced costs.
Sounds like one of the hottest trends in IT, but in this case, it isn't. It's the new mantra of data center facility management as the Internet of Things (IoT) takes hold.
Rather than IT servers, data storage devices, and networking equipment, converged infrastructure is the bones and muscles of any mission critical facility: HVAC, security and safety, and critical power management systems (CPMS).
IoT is facilitating data center infrastructure management (DCIM) and can relieve a host of facility managers' pain points. Optimizing data center performance for efficient use of equipment and floor space, improving power reliability and efficiency (PUE and DCIE), ensuring operational continuity, managing the increasing

complexity, multiplicity, and sizes of facilities, from motherships to minis, and improving reporting and compliance are examples.
The overall magnitude of change can be mind-boggling. Gartner¹ projects there will be about 25 billion IoT-connected devices by the end of this decade.
Just last year, Cisco CEO John Chambers said the IoT will grow to be a $19 trillion market over the next several years. Earlier this year, IBM committed to investing $3 billion over four years in an IoT division that will concentrate on the enterprise. Big Blue also is constructing a cloud-based open platform that will enable clients and partners to build IoT solutions.
As IoT continues growing, data center infrastructures will need to keep pace in terms of compute capacity, sophistication, and required resources. Otherwise, data center owners and users could see the $19 trillion opportunity predicted by Cisco's Chambers evaporate. From a critical power perspective, keeping pace is essential to ensure efficient consumption for IT equipment and HVAC, and to minimize downtime to meet SLAs.
Those embracing the IoT know about operational issues sooner, make decisions faster, and take action more insightfully. The inescapable truth is that DCIM as a professional discipline needs and wants the change. In fact, it may be unavoidable.

POWER METERS AND BASKETBALLS

The things part of the IoT is components, devices, and products, such as power meters and basketballs (really), that become smart when integrated with a technology stack. Multi-layer, micro-infrastructure stacks comprise sensors, microprocessors, compute capability, data storage, batteries, wireless network connectivity, and even embedded operating systems.
The devices have local intelligence and compatible, two-way communication pathways, and, ideally, streamlined network topology protocols that eliminate the repetitive wrapping and unwrapping of data.
While smart technology has been available for a while, innovation has lowered its cost, making it economically viable for widespread application, thus fueling IoT growth. One example that's practically passé is home lighting that can be controlled from a smartphone. Another is the basketballs mentioned earlier. With a built-in technology stack, they collect data on shooting arc, dribble intensity and speed, shot release speed, imparted backspin, and other factors. A smartphone with the necessary app displays the results.
Ralph Lauren, Inc. says the company will debut a polo tech shirt this year that "tracks and streams real-time biometric data directly to your smartphone or tablet." Amazon has taken IoT a step further. It sells field-retrofittable dash buttons that do not have to be integrated into a product. An example is a Tide detergent button that can be stuck to a washing machine. When the consumer is low on Tide, he or she presses the button, which automatically orders more Tide, or whatever product it represents, from Amazon. Low-power microcontrollers and wireless connectivity make it possible.

SMART DCIM PRODUCTS ARE HERE

Can smart DCIM components and devices be far behind? Some say they are, in fact, already here. Smart HVAC and lighting devices and components facilitate management of those systems. CPMSs for data centers are realizing IoT's potential today. Smart CPM components adjust to their environments or operating conditions. Rather than run to failure, they call for maintenance.
Even though IoT-enabled CPMSs are here, it doesn't mean they have universal acceptance. Facilities decision makers responding to a national survey² about CPMS monitoring and control shed light on the capabilities they have and those they want:
• More than two-thirds of respondents either have, or would like to have, monitoring capability from their CPMS. More than half either have, or need, control and reporting capabilities from their CPMS.
• Almost half of those who have control and reporting capabilities also have some sort of integrated system to manage it.
• About 45% of respondents have some type of power quality monitoring and analytics.
No doubt, there's room for improvement.

FIGURE 1. A power control center can display, monitor, and manage multiple critical power systems simultaneously.

MEET THE 'CLUSTERS'

That will materialize as smart products penetrate more facets of critical power management systems, as well as HVAC, lighting, safety and security, and building management systems. That dynamic already is creating interconnected facility management systems called clusters. The clusters, comprising hundreds or thousands of sensors, can be designed for a single building, a multi-building campus, or geographically dispersed facilities.
Each cluster features detailed monitoring, measurement, and control capabilities, and each feeds overview and status information to an overarching building management system (BMS). The BMS orchestrates policy decisions using the aggregated data. Such capabilities make facility networks look like IT networks.
It wasn't always that way. Not that long ago, Ethernet serial networks with web access were cutting edge. Only HVAC, fire, and security had connectivity with building management systems, but interaction among products wasn't widespread. The range of products and their capabilities varied widely. Some may have had simple status annunciation, while others may have included monitoring capability. Most provided only data. On-the-fly analysis was left to operators.

FIGURE 2. Critical power systems in multiple locations can be monitored and controlled from a single control center.


ECOSYSTEMS AND HYPERCONVERGENCE

But even IoT-enabled clusters have room to improve. To truly optimize IoT capabilities, clusters are already evolving into ecosystems. It's a facility infrastructure's version of IT hyperconvergence. While convergence represents bundling a variety of components into a cluster, hyperconvergence includes a management interface for components designed to work together.
Perhaps the best-known example of an ecosystem is Apple products: the iPhone, iPad, iPod, accessories, and apps. Designed to provide an exceptional customer experience, the Apple ecosystem makes consumers unlikely to switch to an Android ecosystem. If you don't think this is important, consider Blackberry. The company didn't really develop its own ecosystem and lost customers in droves, while Apple continues dominating its market, even with fierce competition.
Ecosystems are shaping up in the data center arena, too, as some critical power management system manufacturers have made inroads on building their own. Mission critical facility managers can evaluate such systems as a total solution for managing a data center cluster.
Whether they're clusters or ecosystems, one thing they have in common is generating enormous volumes of data. Big Data. In fact, the exploding number of sensing devices that share data will continue adding volume. As devices with sensing and actuation capabilities become practically ubiquitous, global adoption of Internet Protocol Version 6 (IPv6) will be essential. Managing the volume and variety of data that almost always have different structures and meet various standards will be challenging.

FIGURE 3. Dynamically visualizing a critical power system helps operators quickly and easily assimilate data.

DATA: FRIEND OR FOE?

Not only will facility managers be seeing more of this data than ever before, they will be expected to manage it. That has important consequences, because data can be a facility manager's best friend or worst enemy. The key is to be able to interpret it.
Interpreting data correctly and quickly is what imparts value to it. It's an integrated, three-step process: monitor, predict, and improve. It starts with giving operators accurate, real-time, quantifiable data, cycle by cycle, in milliseconds. Real-time data helps operators better understand product performance. But they must be able to assimilate it easily, which requires dynamic visualization, that is, translating numerical data into graphs, charts, and even pictograms.
Amassing data over time provides analytical opportunities to identify the performance characteristics of a variety of devices and components and the relationships between a building and its IT systems. Monitoring a DCIM's power infrastructure, for example, can precisely determine power usage effectiveness (PUE), cooling system energy efficiency, and overall power quality.
"Power quality analytics can be used for trending and predicting growth," said Junnaid Malik, an electrical engineer at Cosentini Associates. "You may want to know where you are experiencing current level voltage distortion. You may be a colocation facility and want to plan for growth by adding servers."
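As a concrete example of one metric such monitoring yields, the sketch below computes PUE and its reciprocal, DCIE, from total facility and IT power. The two wattage readings are invented for illustration.

# Illustrative PUE/DCIE calculation from monitored power readings.
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power usage effectiveness: total facility power over IT power."""
    return total_facility_kw / it_load_kw

total_kw, it_kw = 1500.0, 1000.0    # assumed metered values
ratio = pue(total_kw, it_kw)
print(f"PUE  = {ratio:.2f}")        # 1.50
print(f"DCIE = {1 / ratio:.1%}")    # 66.7%; DCIE is the reciprocal of PUE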

THE REAL PAYOFF

FIGURE 4. Power analytics reports can help identify and resolve operational issues.

44 |

Mission Critical

SEPTEMBER/OCTOBER 2015

When out-of-parameter performance behaviors occur,


operators have the necessary information to diagnose the
issues and take corrective action. The real payoff, however,
can be the insight gained from this experience to actually
predict performance issues and decide on preventive measures by changing operational parameters, or servicing the
device or component. Predicting and preventing helps make
a facility's power infrastructure more reliable and efficient.
For critical power management systems, the monitor,
predict, and improve process facilitates reducing energy
consumption, projecting capacity requirements, streamlining maintenance, resolving operational issues, and meeting
reporting requirements.
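A toy sketch of the monitoring step in that process, flagging out-of-parameter readings so operators can act before they become failures; the parameter names and limits below are illustrative assumptions, not from any vendor's system:

    # Hypothetical operating limits and a live reading
    limits = {"voltage_thd_pct": 5.0, "inlet_temp_f": 80.6}
    reading = {"voltage_thd_pct": 6.2, "inlet_temp_f": 75.1}

    for param, limit in limits.items():
        if reading[param] > limit:
            print(f"ALERT: {param} = {reading[param]} exceeds limit {limit}")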

Data that has value is data that requires protection. From a facility operations perspective, its continuity needs to be assured by a self-healing network that avoids disruptions or overcomes disruptions instantaneously. From a hacking perspective, its security needs to be assured by multiple levels of encryption: application layer, native database object, network, and point to point. Programmable cryptography, digital certificates, and time stamping also should be part of a data protection program.

WORKS CITED

1. Gartner, "The Potential Size and Diversity of the Internet of Things Mask Immediate Opportunities for IT Leaders." http://www.gartner.com/technology/reprints.do?id=11SIIEJT&ct=140401&st=sb
2. National Power Monitoring & Control Survey of 15,000 facility management personnel, sponsored by ASCO Power.
3. Emerson Network Power, "Emerson Network Power Helps Top 10 Investment Bank Operate a Power Chain 900 Miles Away with ASCO PowerQuest." http://www.emersonnetworkpower.com/en-US/About/NewsRoom/NewsReleases/Pages/Emerson-PowerQuest-Banks.aspx

REPRINTS OF THIS ARTICLE are available by contacting Jill DeVries at devriesj@bnpmedia.com or at 248-244-1726.

FIGURE 5. Trending critical power parameters helps identify and avert potential problems.

Read this article online at www.missioncriticalmagazine.com/bhaveshpatel


The Importance Of Codes And Standards
It's a matter of safety.

By Chris Crosby
Chris Crosby is a recognized visionary and leader in the data center space and the founder and CEO of Compass Datacenters. Chris has over 20 years of technology experience and over 15 years of real estate and investment experience. For the first 10 years of his career, Chris was active in international and domestic sales, sales management, and product development at Nortel Networks, a major supplier of products and services that support the internet and other public and private data, voice, and multimedia communications networks. Mr. Crosby received a B.S. degree in Computer Sciences from the University of Texas at Austin. Chris is also an active member of the Young Presidents' Organization (YPO).

I don't suppose that many of you would expect the phrase "moral imperative" to appear in a serious data center related discussion but, sometimes, technological capability must be tempered by human considerations. Now don't get me wrong, we are decades away from the days in which the disposability (literally) of human capital was viewed as a calculable expense on the part of many firms, but even today, we must sometimes step back and ask ourselves if our processes and procedures remain in synch with our ability to protect our own employees. In terms of data centers, architectural changes and the increasing concerns of downtime prevention and speed of implementation increasingly call for us to evaluate if we are successfully providing for the physical protection of our employees through our adherence to published standards.

THAT'S A LOT OF ELECTRICITY

If we attempt to simplify the data center environment into its primary components, we find that it consists of two basic elements: the raised floor and the power systems used to support it. Think about it. We typically size data centers in terms of their power capacity ("it's a 3MW facility"), and what this translates into is that there is one heck of a lot of electricity coursing through these places. And, since raw electricity and the human body are often incompatible with each other, a number of workplace standards have been developed by regulatory agencies such as OSHA and standards organizations like the National Fire Protection Association (NFPA) to promote safe working environments.

DEFINING A SAFE WORKING ENVIRONMENT

Over the past few years, new standards have been developed, such as the National Electrical Code provisions that cover areas like higher voltages, battery installation, and even modular data centers. The NFPA, through its published standards 75 (covering risk assessments, aisle containment, and fire detection/suppression) and 70E (which specifically covers workplace electrical safety), has also provided clear guidance for data center operators on how to operate a safe worker environment. The problem: too many data centers don't abide by them.

ARC FLASH: A REAL COST OF NON-COMPLIANCE

As a brief refresher, the National Institute of Occupational Safety and Health (NIOSH) defines an arc flash as the sudden release of electrical energy through the air when a high voltage gap exists and there is breakdown between conductors. While this certainly doesn't sound good, when viewed from a real-life perspective, the results can be absolutely catastrophic with the production of enormous pressure, sound, light, and heat. Those unfortunate enough to be anywhere near an arc flash can see heat reach 35,000°F, four times hotter than the sun's surface. Those same workers can also be exposed to molten shrapnel and burns. Vision and hearing loss can also be common physical results of the aftermath.
The logical question to be asked at this point is: why is arc flash becoming such a prominent issue? Rather than stemming from a single, easily identified reason, the increasing risk of arc flash incidents within the data center is the result of a combination of factors. From an architectural perspective, the increase of phased modular data centers has increased the risk of arc flash due to the adding of equipment, post commissioning, to a live (energized) backplane without a shutdown. This issue is exacerbated by the high cost of downtime, which increasingly has generated a desire on the part of facility operators to make changes or modifications without shutting anything down. Thus, more work is performed in a live environment than can safely be justified, with the line between troubleshooting and actual maintenance in a live environment becoming increasingly blurred.
Fortunately, preventing the potential for arc flash incidents is more a function of education and process than physical modifications to the site. For many organizations this means staying current on the arc flash assessment portion of their electrical safety procedures. These assessments should provide specific guidance on what equipment can be worked on, by whom, and in what state. This level of specificity gives data center managers the foundation needed to ensure that potentially hazardous activities are performed correctly and that safety is the primary consideration for the work being performed, as opposed to making potentially hazardous decisions in the desire to minimize downtime.
Ultimately, per NIOSH, "The organization has a responsibility in preventing arc flash injuries... Organizations have the duty to provide appropriate tools, personal protective equipment, and regular maintenance of equipment and training." A commitment to training is a commitment to safety.

DESIGN MATTERS
As previously discussed, the business pressures associated with
the avoidance of downtime often combine with the design limitations of facilities to be the primary causes of accidents that can


be harmful to the site's operational personnel. Unless a facility is specifically developed to support concurrent maintainability,
equipment must be shut down to safely perform maintenance
activities. Unfortunately, many data center providers attempt
to mask this requirement through what can only be described as "creative marketing." Terms like "phased builds" or "multi-tiering" all amount to the same thing: the site has a single backplane that requires that all attached data centers or modules must be brought down to avoid having to work on energized equipment.

TIER CERTIFICATION MATTERS


Since concurrent maintainability (CM) is the only way to allow
maintenance activity to be performed on a piece of equipment
without having to take down the entire site, Tier III certification (which requires CM) is becoming increasingly important. It
is important to note that certification in this case applies to the
actual constructed facility as opposed to just its design. Many
providers obtain Uptime Institute (UI) certification for the paper
plans, but deliver a final product that no longer meets certification requirements. Due to these disparities between what is
marketed and what is actually delivered, prospective end users
should insist on receiving the actual construction certification
documentation from UI.

IMPACT
The vast majority of existing data centers are not concurrently maintainable or standards compliant. Correcting these
situations will require significant upgrades. In some instances
providers may elect to take their chances by maintaining the
status quo within their sites. The decision to do so will be risky
indeed, presenting operators and end users with a variety of potentially adverse consequences including: downtime, safety breaches, poor publicity, increased regulatory scrutiny and
fines, or much worse.

SUMMARY
Obviously, the demands of the data center industry continue to
press providers and end users to maximize uptime and deliver
capabilities faster than ever before. These requirements should
not be incompatible with safe operation. Standards exist for a
reason and compliance should never be a matter of convenience.
There is no higher responsibility for any data center provider
than the safety of its employees, and that is a real "moral imperative."

REPRINTS OF THIS ARTICLE are available by contacting Jill DeVries at devriesj@bnpmedia.com or at 248-244-1726.

Read this article online at www.missioncriticalmagazine.com/chriscrosby


The Payoff Of Preventive Maintenance
Get a little peace of mind.

By Kyle Tessmer
Kyle Tessmer is inside sales coordinator, Service Department of the UPS Division of MEPPI (Mitsubishi Electric Power Products, Inc.). Kyle has worked for MEPPI since July 2013.

In the current age of information and technology, the safety of your data is more important than ever. In addition, the electrical system supporting your data is its lifeline and ultimately responsible for its security. It's no wonder, then, that incorporating an uninterruptible power supply (UPS) into data storage systems is considered a good security strategy. The UPS is the most critical component in protecting the integrity of your data, whether it's merely providing sufficient opportunity to close your PC, laptop, or notebook securely, or functioning as a stopgap measure by supplying power before a back-up generator takes over. A maintenance plan for your UPS provides that peace of mind. No matter what your industry is, your UPS must be covered by a maintenance plan.

A recent study, The National Survey on Data Center Outages, found that the typical cost of interruption was $1.7 million per year, at $7,900 per minute, and these numbers are projected to grow rapidly. Despite the importance of a company's equipment, nearly every organization in the study had at least one outage in the past two years, averaging 2.48 complete shutdowns over the two-year period, with an average duration of 107 minutes. The duration of the company outage correlates to lack of resources and planning, as only 37% of respondents agree there are ample resources to keep their data center fully functional if there is an unplanned outage.
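A back-of-envelope reading of those survey figures; the formula below is simple arithmetic layered on the numbers cited above, not part of the study itself:

    # Figures from the survey cited above
    shutdowns_per_two_years = 2.48
    avg_duration_min = 107
    cost_per_min = 7900  # dollars

    annual_cost = (shutdowns_per_two_years / 2) * avg_duration_min * cost_per_min
    print(f"${annual_cost:,.0f} per year from complete shutdowns alone")  # ~$1.05M

That lands in the same order of magnitude as the study's $1.7 million annual figure; the remainder presumably reflects shorter, partial outages not counted as complete shutdowns.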
Research indicates that regular preventative maintenance can
extend your UPS unit's life cycle and can alert you to potential
problems before they become significant issues. That same
research also concluded that UPSs that were properly maintained
were significantly less likely to succumb to any downtime; in
fact, customers without the recommended two maintenance visits per year were highly vulnerable to equipment malfunctions.
Most UPS system failures can be categorized by six symptoms: Failure of the DC source (batteries), improper grounding
systems, distribution system faults, poor maintenance practices,
incorrect distribution coordination (DCF), or human error. Surprisingly, more than two-thirds of downtime events stem from
a preventable cause. Through systematic inspections, a maintenance program ensures that the numerous parts of the UPS are
thoroughly evaluated, cleaned, tested, and calibrated. A successful maintenance plan takes into account the age of the UPS and
helps customers budget for major replacement intervals.

TABLE 1. The typical parts and maintenance schedule of a Mitsubishi UPS with a 15-year lifespan vs. the industry standard. [The original chart maps UPS PM, battery PM, and each replacement part (air filter, cooling fans, electrolytic capacitors, AC filter capacitors, control relays, contactors, printed circuit boards, control power supply, LCD, fuses, thermal relays, and VRLA/wet-cell batteries) across years 1 through 15, with Mitsubishi's recommended schedule shown against the industry-standard replacement schedule.]

WHAT SERVICE IS BEST FOR YOU?


Preventative maintenance can be defined as the use of
instruments and analysis to determine equipment condition
and to perform corrective measures in order to predict failure before it takes place.
Choosing a service provider can be a daunting decision.
Some customers prefer a contract with an independent vendor, while others choose a service contract or extended warranty from the UPS manufacturer. A number of companies
employ engineers who are able to service the UPS; others
choose to engage service only when an issue arises. All
these options have advantages and disadvantages. No one
choice is the best solution for every organization.

QUESTIONS FOR CHOOSING A SERVICE PROVIDER AND PLAN

If the UPS fails, what is the cost of downtime to my company?
How critical is power to my application? Is it an inconvenience or would I lose sales or shut down critical servers?
What response time do I need in an emergency situation?
How many trained field technicians are in my area who can specifically service my UPS model? Also, do they carry spare part kits?
Do I have any budget constraints for the UPS service?
How much maintenance do I need for what I can afford?
What service package is recommended by the manufacturer?
Have I budgeted for replacement parts, planned or unplanned (battery, capacitors, fans)?
What is my risk tolerance in terms of a UPS failure and what happens if the UPS fails?

FIGURE 1. The payoff of preventive maintenance includes increasing uptime.
Regardless of answers to the above questions, a preventative maintenance plan will save you time and money by
minimizing interruptions and maximizing uptime, along
with enhancing your overall return on investment. The following are various service option considerations.

OPTION A: OEM SERVICE CONTRACT


A service contract with the manufacturer of your UPS
offers a number of benefits. First of all, purchasers obtain
the extensive knowledge, expertise, and capabilities of
factory-trained field technicians who receive continuous
training from the manufacturer of specific UPS systems.
This results in knowledgeable technicians who have current
and comprehensive information regarding the functionality
of your UPS along with the latest software and upgrade
kits to maintain peak-performance levels. Field technicians
also possess advanced troubleshooting capabilities and
techniques, reducing repair time.
Routine maintenance through OEM service will provide history that can be trended over time to predict underperforming
parts, batteries, and end of equipment life. The recommended
parts replacement schedule can be completed during routine
maintenance visits, and these changes are documented over
time. A routine maintenance program will also provide documentation and validation for any warranty claims.
With a national infrastructure of field technicians, technical support personnel, and engineers, UPS manufacturers possess a greater number of field and office resources than most independents. Included are risk programs that can be overlooked by customers, such as safety protocols and levels of insurance.
The manufacturer's technicians also have the advantage of quick access to spare parts. The spare parts are kept
either in stock in a van or central location, ensuring that
an issue is resolved immediately. Many service plans offer
discounts on these spare parts and upgrades, further reducing the overall cost.
To meet customer-specific needs, UPS manufacturers
offer a variety of service plans, including preventative
maintenance, extended warranty, and parts/labor coverage.
Various features can also include 24x7 coverage, quarterly
maintenance visits, remote monitoring, and response times.
Although the service may be priced slightly higher than
that of an independent service company, the advantages that
only a UPS manufacturer can provide may outweigh the
additional costs.

OPTION B: INDEPENDENT SERVICE PROVIDER

These businesses provide services for UPSs, such as maintenance, start-up installation, or emergency services. Independent service providers are generally priced lower than a manufacturer, although they may have fewer resources
available and may not be trained on your particular model
of UPS.
An independent service provider's field technicians generally have been trained on a specific product or brand but
are not certified by the manufacturer. Important to note:
unauthorized service work on your equipment will void the
warranty. UPS products are continuously updated and modified. For that reason, if a technician has not been trained by
the manufacturer, he or she may not have the knowledge
to service the UPS properly. This can result in hazardous
conditions and potential load loss. Please remember: if a potential service provider's authorized status is in question, you should contact the OEM and verify the status.
Generally, independent service providers will contact a UPS manufacturer's engineers and technical support experts in order to back up their own field teams. To obtain spare parts, these providers will contact the equipment manufacturer and will typically give the customer a longer than expected lead time for the parts' arrival onsite. From time to time, the OEM will release software updates for your UPS, which an independent service company will not have access to, and that could leave your equipment prone to failure. In addition, the service provider's safety records and insurance requirements may or may not be kept at acceptable levels.
While independent service providers do not generally
offer a written guarantee from the UPS maker, they do offer
preventive maintenance with a variety of service levels.

OPTION C: SELF-MAINTENANCE
While self-maintenance is a service option, it is not recommended by the majority of OEMs, as service on this equipment should be left to a factory-authorized technician.
If a company has an internal resource with sufficient
safety and electrical skills, it may elect to maintain the UPS
system in-house. The most important part of self-maintenance is ensuring you have an effective plan in place and
that you have the necessary skills for in-house maintenance.
First responder training is available to all customers. This
training can enable a skilled person to understand the operation, safety, and environmental concerns and basic preventive maintenance for your UPS. In addition, the designated
person must understand the alarm conditions and required
responses for specific events, along with the precise steps to
start and stop a UPS in various scenarios.
Spare parts kits are available through the manufacturer
and can supplement any service plan for their equipment.
It is important, however, that an organization has access to
a service provider for critical repairs or in case of an emergency situation.

OPTION D: TIME & MATERIAL


The time and material option ("pay as you go") is an approach some customers elect to take, calling for service only periodically. This option may make sense if a service contract is not available for your UPS. However, the tactic may not make economic sense if you have a more complex system.
Time and material (T&M) service is available at any time, for all customers. Typically charged per hour of labor, with a minimum
time frame, T&M rates may vary depending on the maintenance
window; this can include after-hours or weekend services.
Response times can vary but would typically be "best effort," with no guarantee of arrival, as contract customers are given priority.
A downside to T&M is that replacement parts are typically
more expensive; contract customers are given discounts off the
list price of parts and labor.

Regardless of the process you choose, some form of maintenance is crucial to maximizing uptime and the effectiveness of your UPS.
Without proper maintenance, a UPS will eventually deteriorate and can expose the facility to an equipment malfunction or failure. Regularly scheduled maintenance for the UPS will ensure equipment reliability and benefit the organization's bottom line.
Preventive maintenance objectives are to maximize uptime by making necessary repairs before they become emergencies. Routine maintenance will provide records of your equipment performance and allow you to budget for replacement intervals, reducing or eliminating downtime. An effective maintenance plan should be implemented sooner rather than later and can significantly reduce the total cost of ownership (TCO).

CONCLUSION
UPS technology is advancing and has significantly improved. With this expansion, it is critical to ensure that your system is supported by a maintenance plan. The benefits of preventative maintenance are something you should be aware of as a consumer; there are advantages and disadvantages to each option.

REPRINTS OF THIS ARTICLE are available by contacting Jill DeVries at devriesj@bnpmedia.com or at 248-244-1726.

Read this article online at www.missioncriticalmagazine.com/kyletessmer


Put your UPS in the eye of the storm.

By Emilie Stone
Emilie Stone is the general manager of Methode Active Energy Solutions located in Boulder, CO. She brings nearly a decade of experience in automotive design and manufacturing to data center equipment, helping engineer reliability and robustness.

In the context of a data center, it is hard to be hyperbolic when describing an event as a disaster. An event as commonplace as a spring thunderstorm can be enough to wreak havoc: flooding, fallen trees, lightning strikes, and power outages are all possible.
One ever-increasing threat to the data center environment is grid stability. In the last 30 years, grid-level power outages in the United States have increased nearly 300%, with over 3,600 blackouts costing American businesses $150 billion in 2014. Aging infrastructure, increased power demand, and more erratic weather systems will cause those numbers to continue to rise. Hardening your data center against power loss is critical to maintaining uptime.

BRIDGING THE GAP

When auditing the disaster recovery plan of a facility, the ability to survive a loss of power is crucial. Most facilities with critical equipment will be equipped with an emergency generator sized to run critical equipment for the desired time, usually at least an hour. However, gensets, particularly those large enough to back up an entire facility, can take 60, even 90 seconds to come online. The power supplies on the front ends of a typical server are only designed to have 12 milliseconds of ride-through time. Uninterruptible power supplies (UPS), which are often provisioned solely for back-up power, are increasingly being deployed as ride-through devices. Both line interactive and double conversion UPS equipment can provide AC via the battery well under the 12 milliseconds of ride-through on the power supply of a server. When sized correctly, a UPS can be the perfect device to cover both short-term power dips and blips, as well as longer outages that require a genset.

TRUSTING YOUR POWER LOSS PLAN


Given the ever-increasing number of outages, it is important to
not only deploy the equipment required to survive an outage, but
to ensure that it is working properly. Disaster preparedness advisors typically recommend testing power back-up equipment on
a monthly basis and running at expected full load on a quarterly
basis. For a UPS, this means running a battery test. A battery test
can be scheduled through the standard interface of most UPSs.
The test entails discharging the battery and comparing actual
capacity to expected or nameplate capacity.
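A minimal sketch of the comparison such a test performs; the 80% pass threshold and the readings below are illustrative assumptions, not a manufacturer's specification:

    def battery_health(measured_wh, nameplate_wh, pass_ratio=0.80):
        # Assumed pass threshold: at least 80% of nameplate capacity remains
        ratio = measured_wh / nameplate_wh
        return ratio, ratio >= pass_ratio

    ratio, ok = battery_health(measured_wh=96.0, nameplate_wh=125.0)
    print(f"{ratio:.0%} of nameplate -> {'PASS' if ok else 'REPLACE'}")  # 77% -> REPLACE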
One caveat to the recommendation to regularly test a UPS battery is that the very test designed to provide "state of health" will actually degrade battery state of health in lead-acid systems. UPSs designed with new lithium-ion battery technology have up to 300% the cycle life of lead-acid batteries at deep discharge. In addition, lithium-ion batteries have integrated battery monitoring, meaning battery voltage and capacity can be measured and reported in real-time without requiring a full discharge-charge cycle. This translates to the UPS battery remaining at or near full charge all of the time, thus reducing risk associated with an outage overlapping with a back-up system test.

TABLE 1. Comparison of lead acid and lithium-ion batteries for 2,000 cycles at 125 watt-hours.

                 REQUIRED CAPACITY   VOLUME            WEIGHT
   Lead acid     500 W/hr            128 in3 (2.1 L)   9.3 lbs (4.2 kg)
   Lithium-ion   125 W/hr            17 in3 (0.3 L)    1.5 lbs (0.7 kg)

A BETTER BATTERY FOR THE JOB


Lithium-ion batteries offer other advantages in the event of
an outage. Lithium-ion has three times the energy density
(by both weight and size) of lead acid and nearly identical
cycle life whether utilizing the UPS battery at 30% depth-of-discharge or 80% depth-of-discharge. These two characteristics combine in very meaningful ways for a data center
manager looking to mitigate power outage risk.
First, a lithium-ion battery for a given power and duration requirement can be much smaller than a lead acid
battery deployed in the same application. Not only does the
lithium-ion battery pack more energy into a smaller package, its tolerance for high depth-of-discharge means it has
more usable capacity.
A smaller UPS battery can be re-charged quickly and thus
be primed and ready for repeat outages or power pulses. A
more compact battery can also be deployed with or immediately adjacent to critical equipment. Rather than relying on
large power runs from a centralized battery room, back-up
power can be located directly in the rack. This lessens the
risk associated with power transmission and reduces sheer
square footage of a facility that must be monitored and
neutralized.
By extension of cycle-life independence from depth of
discharge, lithium-ion batteries also demonstrate a low
degree of capacity fade. This translates directly into a longer life for the UPS battery.

LITHIUM-ION IN PRACTICE
California serves as a prime location for many data centers
and other critical infrastructure. Pacific Gas and Electric,
the primary utility for the state of California, shows 83 power outages on its live service interruption website as this article is being written. Eaton's annual Blackout Tracker reports 537 major outages experienced in 2014, equivalent to an outage every 16 hours, with an average duration of 49 minutes. It is not a matter of if a data center will lose power; it is a matter of when.
In order for a UPS to be a reliable, long-term asset in the
battle against blackouts, it must have a high cycle life and
low capacity fade. Consider an application in which 5 kW
of equipment must be backed up for 90 seconds in outage-prone California. A comparison of this hypothetical 125
watt-hour battery can be made using commonly available
battery data. Assuming 80% depth-of-discharge, a typical
absorbent glass mat (AGM) lead acid battery would only
last 0.9 years while a lithium-ion pack would last 3.7 years,
or 300% longer. To prolong the life of the lead-acid battery,
a less aggressive depth of discharge can be used, but will
require the battery to be over-sized by 400% from a capacity standpoint. When translated into the physical realm, the
results are more stark, as seen in Table 1.
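A back-of-envelope sketch of that comparison, assuming each outage consumes one deep-discharge cycle; the cycle-life figures are illustrative assumptions (roughly 500 cycles for AGM lead acid at 80% depth-of-discharge, 2,000 for lithium-ion, consistent with Table 1), with the outage rate taken from the Blackout Tracker figure above:

    outages_per_year = 8760 / 16   # one outage every 16 hours (Eaton, 2014)
    agm_cycles = 500               # assumed AGM lead acid life at 80% DoD
    li_ion_cycles = 2000           # assumed lithium-ion life at 80% DoD

    print(f"Lead acid:   {agm_cycles / outages_per_year:.1f} years")     # ~0.9
    print(f"Lithium-ion: {li_ion_cycles / outages_per_year:.1f} years")  # ~3.7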
Of course, when sizing a UPS, margin for both size and
run-time should be used to ensure proper functionality, but
the underlying principle that lithium-ion batteries offer
reduced risk and increased efficiency in the data center still
holds.

CONCLUSION
When preparing or even updating a data center disaster
recovery plan, careful consideration of back-up power
should be given. Having a UPS to provide power during
short outages or while a genset is brought online will maintain uptime, even during non-ideal circumstances. Lithium-ion UPS batteries provide a longer lasting, more reliable solution that is compact enough to be deployed right where the equipment requiring back-up is located, greatly reducing the overall risk from power outage at a facility.

REPRINTS OF THIS ARTICLE are available by contacting Jill DeVries at devriesj@bnpmedia.com or at 248-244-1726.

Read this article online at www.missioncriticalmagazine.com/emiliestone


IT Agility: Making Better Use Of Power Monitoring Data
Designed for the Internet of Things, today's data center hardware provides valuable feedback that enables all-software instrumentation for automation.

By Jeff Klaus
Jeff Klaus is the general manager of Data Center Manager (DCM) Solutions at Intel Corporation where he has managed various groups for more than 13 years. Klaus's team is pioneering power- and thermal-management middleware, which is sold through an ecosystem of data center infrastructure management (DCIM) software companies and OEMs. A graduate of Boston College, Klaus also holds an MBA from Boston University. He can be reached at Jeffrey.S.Klaus@intel.com.

Every year, IT has to do more: manage more servers, smart devices, apps, and services. End users continually raise the bar, demanding more performance and better response times. And while everything about the modern data center is growing, IT headcounts and operating expense budgets remain the same or shrink. Data center managers and their teams must continually find ways to work smarter and reduce the biggest expense items. Since energy costs have risen to the top of the list, looking for ways to consolidate and reduce power consumption is a good place to start.

Virtualization and cloud technologies have helped, making it possible for IT to more cost-effectively build and manage more energy-efficient data centers. However, with all of the attention focused on the high-level data center models, it can be easy to overlook some of the significant advancements at the hardware level. The latest servers, storage devices, switches, racks, power distribution units, cooling equipment, air handlers, and the myriad of other components are all feeding status information onto the network. Servers, in particular, have also evolved to give remote IT teams many new, fine-grained monitoring and control options.

ACTIONABLE INTELLIGENCE
At first glance, the built-in network-based monitoring and control functions are interesting, but IT teams certainly can't afford
to be continually querying and adjusting individual devices.
Manually collecting enough data points to identify patterns
and trends would be even more impractical. Solution providers have consequently evolved system consoles and data center
dashboards to take advantage of middleware technology that
automates the collection, aggregation, reporting, and logging of
a broad range of device status information.
Many of the world's largest data centers now take advantage
of this type of all-software data center instrumentation. Highly
automated IT practices include monitoring real-time server inlet
temperatures and power consumption data from rack/blade servers, PDUs, and UPSs. Airflow is also monitored.
Best-in-class holistic energy management solutions consume
this information and turn it into energy and thermal maps of individual server rooms and data centers. Combined with control capabilities such as power capping and dynamic server frequency adjustments, the aggregated intelligence is helping
data center and facilities teams better understand and manage
energy costs, and make better decisions.
For example, the real-time power data improves capacity planning. In the past, IT had to rely on the manufacturer's specifications for peak power, estimate a de-rated power specification, or carry out bench tests with simulated workloads. With logged power consumption data gathered from production servers, IT can now more confidently and aggressively provision new servers and racks. Power capping capabilities, another feature available with today's modern servers, can be applied to make sure that the more densely
populated racks do not exceed the maximums. Without the
risk of subjecting equipment to damaging power spikes,
the higher rack densities ultimately reduce the number of
required racks as well as floor space and cooling.
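A simplified sketch of provisioning against measured draw instead of nameplate; the crude percentile approach and every number here are illustrative assumptions, not a vendor algorithm:

    # Hypothetical logged power samples (watts) for one server model
    samples = [312, 298, 351, 340, 367, 329, 344, 358, 333, 361]
    nameplate_w = 550                                       # vendor peak-power spec
    p99 = sorted(samples)[int(0.99 * (len(samples) - 1))]   # crude high percentile

    rack_budget_w = 5000
    print(rack_budget_w // nameplate_w)  # 9 servers per rack by nameplate
    print(rack_budget_w // p99)          # 13 servers per rack by measured draw,
                                         # held safe by a power cap near p99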
Besides more accurate capacity planning, making use of
the available power, temperature, and airflow status supports improvements in the following areas:
Better data center operations. Besides identifying hot
spots, thermal maps can point out overly cooled aisles. IT
and facilities teams can also take advantage of increased
visibility of power and temperature behaviors to adjust
the ambient temperature in server rooms. Since a single
degree increase translates to significant annual savings in terms of cooling costs, many data centers are embracing hotter room temperatures, especially since modern servers and data center equipment are rated for higher temperatures.
Asset utilization/consolidation. Power consumption patterns can highlight "ghost" servers, or those servers that are idle or under-utilized (see the sketch after this list). Since an idle server draws approximately 50% of its maximum specified power requirements, being able to evaluate server utilization patterns can lead to major savings. IT teams can consolidate servers or introduce on-the-fly adjustments to put idle servers into more power-conserving sleep modes.
Workload scheduling. Job scheduling, even within
highly virtualized environments, can be carried out while
considering the impacts on overall power consumption.
Power-aware virtual machine migration and job assignments support more energy-efficient operations, and raise
awareness of the energy costs associated with individual
tasks or organizations' workloads.
Equipment lifespan optimization. Real-time thermal
maps vividly highlight hot spots, and put IT in a proactive position for avoiding any damage to the most heavily
loaded and mission critical servers. The same power capping features that help protect densely populated racks can
also mitigate thermal issues that would otherwise damage

or shorten the life of servers and other data center equipment. Alternatively, IT and facilities can adjust cooling
and airflow systems to address and eliminate the hot spots
or shift workloads to avoid them.
Business continuity. Armed with a better understanding
of the actual power requirements associated with various
services, systems, or groups of users, IT can adjust disaster recovery plans to more intelligently allocate back-up
resources or shift workloads during outages. More intelligently allocating power can extend the life of back-up
power supplies by up to 25% based on actual experiences
reported by many data center operators.
SLA management. IT can establish power policies that
guarantee the optimal execution of the high-priority services. Automatic threshold management can flag when
systems, racks, or rows are approaching limits, giving IT
the ability to proactively adjust resources before limits
impact service levels.
Avoidance of peak-period utility rates. Many large companies distribute data centers geographically to deliver the
best possible service to each location. With visibility of
the power consumption patterns, IT has the cost-reducing
option of scheduling some workloads remotely to take
advantage of off-peak power rates.
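As flagged in the consolidation item above, a minimal sketch of spotting likely ghost servers from logged power data; the 55% threshold (just above the ~50% idle draw cited in that item) and the sample figures are illustrative assumptions:

    # Assumption: a server idling full-time draws roughly half its specified
    # maximum, so an average near that level marks a consolidation candidate.
    fleet = {
        # name: (average measured watts, specified maximum watts), hypothetical
        "srv-01": (252, 500),
        "srv-02": (412, 500),
    }
    IDLE_BAND = 0.55  # within ~55% of max spec counts as "mostly idle"

    for name, (avg_w, max_w) in fleet.items():
        if avg_w <= IDLE_BAND * max_w:
            print(f"{name}: averages {avg_w} W of {max_w} W max -> candidate")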

MORE CONTROL, LOWER OPEX


These are some of the many ways that data center teams are
applying automation and middleware technology to gain more
agility and control of data center resources. IT and facilities
are better able to adjust and allocate data center assets while
simultaneously reducing the energy costs for the data center.
At a higher level, software instrumentation for power and
temperature monitoring is also being used to adjust power
management policies for groups of servers, racks, rows,
rooms, and entire data centers. The information helps shape
green initiatives and conservation efforts. Some data centers also apply power data to more accurately charge-back
services. Based on published results and surveys, intelligent
energy management solutions are yielding 20% to 40%
reductions in OPEX by eliminating energy waste alone.
Note that the improved agility and cost reductions are not
the result of monitoring alone. As mentioned previously,
the latest generations of data center equipment also make it
possible to remotely adjust key parameters. For example, IT
can dynamically adjust the internal power states and processor operating frequencies of data center servers.
While predominantly employed by IT, the software
instrumentation that feeds power and temperature data into
consoles and dashboards provides a visual, intuitive summary of environmental conditions that also helps facilities
teams. The aggregated information, in fact, enhances collaboration between IT and facilities teams to better align


infrastructure and building planning and management.
The combination of monitoring and these types of controls lets IT optimally balance server performance and power, without
noticeably degrading the user experience or service levels. Field
tests have shown that dynamic adjustments can achieve as much
as a 20% reduction in server power consumption.

CONCLUSIONS
Thanks to rapidly rising global data consumption in our highly
connected world, data center energy consumption is also on the
rise. NRDC reported that 10% of global energy use (91 billion kWh)
is now attributed to global data centers. Power and cooling costs
have become the biggest component of data center operating
budgets. Gaining more visibility of the actual power and thermal
patterns in the data center should therefore be considered a priority goal for any data center.
Fortunately, IT teams can introduce highly automated monitoring and control solutions to aggregate and apply real-time
data that already exists throughout the data center. The significant savings in terms of avoiding wasted energy and extending
the life of equipment offer a strong business case for investments
in software solutions and speed deployment and start-up times.
Best-in-class solutions, in fact, offer agentless monitoring and control. The easily integrated software instrumentation minimizes the burden on the IT staff and gives both infrastructure
and facilities personnel the tools they need to more effectively
achieve their goals.
The latest generation of holistic energy management solutions represents a major advancement in data center monitoring
and management systems and dashboards, and the middleware
approach has been proven to deliver the necessary scalability to keep pace with data centers. The open, flexible software
architectures also strengthen alignment with today's flexible,
on-demand service delivery models. Look for more expanded
feature sets and deployment options as hardware vendors and
systems integrators take advantage of the continuing evolution
of intelligent data center hardware and interconnect standards.

REPRINTS OF THIS ARTICLE are available by contacting Jill DeVries at devriesj@bnpmedia.com or at 248-244-1726.

Read this article online at www.missioncriticalmagazine.com/jeffklaus


Ultraviolet Energy And The Data Center
UV is an important addition to the data center cooling equation.

By Forrest Fencl
Forrest B. Fencl, CEO and co-founder of UV Resources, passed away on Aug. 1, 2015. A lifelong inventor and respected industry leader, Fencl pioneered the modern application of ultraviolet germicidal irradiation (UV-C) in HVACR equipment, writing or co-writing 17 patents and several ASHRAE Handbook chapters related to ultraviolet air and surface treatment. Please direct questions to UV Resources president Dan Jones, dan.jones@uvresources.com.

Extracting heat from server rooms/warehouses has only grown in importance with today's high-performing, temperature-sensitive equipment and energy costs.
Yet, building owners, operators, and engineers must cope with lower or flat maintenance budgets, so keeping equipment energy efficient, sustainable, and operating at original design levels can be a challenge. Because cooling systems represent the most expensive parts of a data center facility to both construct and to operate, even the smallest improvement in energy efficiency can translate to sizeable savings.
It is these very incremental efficiencies that offer an untapped savings opportunity for managers and operators of server rooms and warehouses.
Given the well-documented and often growing cooling demands by data centers, HVAC equipment must operate at its original design capacity. However, as air-conditioning/refrigeration equipment ages, its ability to maintain temperatures and humidity levels declines. Most often, the culprit is reduced coil heat-transfer effectiveness, or the ability of air-handling unit (AHU) cooling coils to remove heat from the air.

RESTORING COOLING CAPACITY


These inefficient heat-transfer rates derive primarily from the buildup of organic contaminants on, and through, the coil's fin
areas. Such buildups are eliminated through the use of ultraviolet
germicidal irradiation, or light in the UV-C wavelength (254 nm).
UV-C works by disassociating molecular bonds, which in turn
disinfects and disintegrates organic materials.
UV-C lighting is not an exotic, new technology. It has been used extensively since the mid-1990s to significantly improve HVAC airflow and heat-exchange efficiency, which can reduce energy use by up to 35%. UV-C by itself doesn't
save energy; rather, it restores cooling capacity and airflow to
increase the potential for energy savings.
In new/OEM equipment, UV-C keeps cooling coil surfaces,
drain pans, air filters, and ducts free from organic buildup for
the purpose of maintaining as-built cooling capacity, airflow
conditions, and IAQ. In retrofit applications, UV-C eradicates
organic matter that has accumulated and grown over time, and
then prevents it from returning.
This trifecta of boosting capacity, saving energy, and lowering maintenance is driving more than nine of every ten UV-C installations today.
All of these system-enhancing efficiencies of UV-C technology are discussed in greater depth in the American Society of Heating, Refrigerating and Air Conditioning Engineers (ASHRAE) 2011 Handbook of HVAC Applications,
Chapter 60.8, which states: "UV-C can increase airflow and heat-transfer coefficient and reduce both fan and refrigeration system energy use." In other words, UV-C helps to
restore, and thereafter maintain, original cooling capacity.

SERVERS' NEED FOR CONSTANT COOLING

Servers generally require an airflow volume of about 160 cubic feet per minute (cfm), while blade servers consume about 120 cfm of air between 66 and 77°F per kilowatt (kW).
In 2004, ASHRAE recommended a temperature upper limit of 77°F for data centers. In 2008, the association raised the upper limit to 80.6°F, and this limit remained in the 2011 recommendations, which may provide some operators a hedge against lost cooling capacity.
Despite this industry benchmark, though, most data center operators believe that higher temperatures lead to equipment failures. Therefore, half of all data center managers strive toward a temperature goal of 71 to 75°F, with more conservative colleagues (37%) aiming for the 65 to 70°F range.
Modifying return air ductwork to capture hot aisle air
separately allows return air temperatures to increase, which
provides for greater temperature differentials at the cooling
coil. So long as coil heat exchange efficiency is maintained
to original design specifications, this would allow the cooling coil to operate more efficiently. Liebert Corporation
states that capturing hot aisle air in this manner can provide
up to a 25% increase in equipment capacity and an increase
of 30% in cooling system efficiency.

ROAD TO SUSTAINABLE
COOLING
However, in regard to sustainability, these performance-based cooling systems should be of concern for the operator.
The main reason relates to the preservation of system cooling
coil heat exchange efficiency, or more specifically, capacity.
Data center cooling designs vary, but the common reality is
increasing heat loads, decreasing capacity from fouled coils
and therefore, challenging cooling demands overall.
In some of the designs, variable speed chillers, pumps,
and fans, along with EC motors, etc., have allowed facilities to initially meet cooling loads in the most cost-effective way. Aiding this equipment are digital controls aimed toward saving energy in often over-designed cooling equipment.
The energy efficiency goal among some users is to achieve a rating as close to 1 as possible. This is thought to be accomplished by reducing the energy consumption of cooling equipment. That may be possible in a new facility, but it is very difficult to obtain from older equipment. Also, the intelligent building equipment mentioned above could be masking ongoing losses in heat exchange efficiency (capacity), such that the +15% surplus of original system capacity may have slowly eroded through lost coil heat exchange efficiency and airflow from coil fouling alone, as seen below.
Energy savings from cooling equipment might not be possible, as minor increases in air-leaving wetbulb temperatures from a fouled coil can have dramatic effects on system capacity and thus energy use. For operators who have the instrumentation, or access to outside test and balance services, air conditioning unit capacity can be demonstrated through simple measurements using the ASHRAE equation:

BTU/hr = CFM x 4.5 x (h1 - h2)

where h1 and h2 are the air enthalpies (in BTU per pound) at the entering and leaving wetbulb temperatures.

A velocity traverse at the coil is used to establish CFM, and the coil upstream and downstream wetbulb temperatures (h1 and h2) are used to populate the above equation.
Another important measurement is the pressure drop across the coil. Even a small amount of coil fouling will cause the coil's pressure drop to increase, which will cause the system airflow to decrease, or be digitally compensated for in fan speed (see below). In the equation above we calculate system airflow in CFM, which plays a major role in determining system capacity. In other words, coil fouling will reduce both the system's heat removal capability and airflow.
The effects can be seen in this example: a new 20,000 cfm system with an air entering wetbulb temperature of 64°F (h1 = 29.31 BTU/lb) and an air leaving wetbulb temperature of 53°F (h2 = 22.02 BTU/lb) would deliver 20,000 x 4.5 x (29.31 - 22.02) = 656,100 BTU/hr of cooling. If the current air leaving wetbulb temperature is only one degree higher (54°F, or 22.62 BTU/lb), it would look like this: 20,000 x 4.5 x (29.31 - 22.62) = 602,100 BTU/hr, a drop of (656,100 - 602,100) = 54,000 BTU/hr, or (54,000/12,000) = 4.5 tons of lost cooling capacity.
When a slight reduction in airflow is added in, it would look like this: 19,000 x 4.5 x (29.31 - 22.62) = 571,995 BTU/hr, a drop of (656,100 - 571,995) = 84,105 BTU/hr, or (84,105/12,000) = 7.0 tons of lost cooling capacity: a total reduction of 13% in capacity from some seemingly minor changes in performance. It's not uncommon to find airflows reduced by 25% and air leaving temperatures increased by 3 degrees. Again, eroded surplus capacity might not be apparent, which warrants the taking of measurements to be sure. Potential energy savings may already be gone.
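For operators who prefer a quick spot check, the same arithmetic in a few lines; the enthalpy values are the ones from the example above (a fuller version would interpolate enthalpy from a psychrometric table):

    def cooling_btu_per_hr(cfm, h_enter, h_leave):
        # ASHRAE total-capacity equation: BTU/hr = CFM x 4.5 x (h1 - h2)
        return cfm * 4.5 * (h_enter - h_leave)

    clean = cooling_btu_per_hr(20_000, 29.31, 22.02)    # 656,100 BTU/hr
    fouled = cooling_btu_per_hr(19_000, 29.31, 22.62)   # 571,995 BTU/hr
    lost_tons = (clean - fouled) / 12_000               # 12,000 BTU/hr per ton
    print(f"Lost: {lost_tons:.1f} tons ({(clean - fouled) / clean:.0%})")  # 7.0 tons (13%)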


When capacity is lost (temperatures not being made), fan speed is often increased digitally, manually, or mechanically to help compensate for the loss. However, fan horsepower (energy use) increases with the cube of rpm:

HP2 / HP1 = (RPM2 / RPM1)^3

This demonstrates that the simple fix of increasing the fan's speed consumes more energy than most of us realize; a 10% increase in fan speed, for example, raises fan horsepower by roughly 33%. It
may temporarily satisfy capacity losses; however, when that
isn't the case, further modifications are usually performed.
In chilled water systems, the water's temperature, which may also be automatically controlled, is lowered, increasing the temperature differential between the air and the coil surfaces, thereby increasing the heat transfer rate. Often, this is enough to return system capacity to where it was when the coil was clean, but at a noteworthy cost. The lowering of the water temperature requires a significant increase in energy use, and it is often
accompanied by pumping additional water volume. Increasing
pump rpm has the same consequences as increasing fan rpm,
a boost in horsepower to the cube of the increase. All of the
above makes a case for obtaining and keeping a coil perfectly
clean so that the original design advantages can be captured
and maintained for the life of the system.
For direct-expansion (DX) systems, typically, run times
are increased, and those DX machines equipped with variable drives could consume additional energy as fans speed
up to compensate for the increased pressure drop across the
coil. As temperature differentials across the evaporator coil
lower, temperature differentials across the condenser coil will
decrease as well, yielding an overall loss in cooling capacity.
There may be other issues as well, such as head temperatures
and pressures at the compressor. The key again is to obtain
and keep the evaporator coil as clean as possible, which will
restore the unit to near as-built conditions, or more important,
capacity and therefore, potential energy savings.
In the future, both energy and water are going to cost
more, and the 2012 International Green Construction Code
(IgCC) focuses heavily on measures that reduce water consumption, too. Condensate from cooling equipment will be
required to be collected and reused. Of interest, when UV-C
is used, drain pan water is disinfected and free of agglomerated organic material, meaning that it can be easily reused,
often without further treatment. When municipalities adopt
green building codes, the focus will be on business consumption of water, and will most likely include data centers.
Again, UV-C serves to restore the coil's performance
to regain system capacity. And, as system capacity
increases, the energy that had been wasted to compensate
for lost capacity is returned in the form of lower power
consumption.


COIL CLEANING
Equipment manufacturers usually recommend coil cleaning twice
a year, and no less than annually to not only prevent mold growth
and capacity loss, but to keep contaminants from compacting
deep within the coil. With staffs and budgets shrinking, however,
time and money for in-house or contracted coil cleaning is becoming scarce. In fact, some building operators report they have not
cleaned their AHUs' coils in three or four years. If coil cleaning is
not performed regularly, contaminant buildup deep inside internal
surfaces can become so difficult to remove that expensive coil
replacement becomes the only option. UV-C has been shown to
clean even compacted contaminant from a coil.

COSTS AND PAYBACK


Users report that UV-C installations are very cost effective.
Most see paybacks in less than six months on energy use alone.
Many users report that their cost for an installed system featuring
high output lamps was about $0.10 per cfm (U.S.). For a 10,000
cfm system, this amounts to an investment of about $1,000.
Field reports indicate that the first-cost of a UV-C system
(initial investment) is approximately the same (or less) as a
single, properly performed coil-cleaning procedure, especially when system shutdowns, off-hours work, associated
overtime, and/or contractor labor costs are considered.
The operating cost for a system that is on year-round
(24/7/365) is far less than 1% of the power to operate that air
conditioning system. This amount is a noteworthy bargain in
those systems that return 5% or more of their capacity.
UV-C light is an amazingly effective and affordable technology for keeping critical components of commercial HVAC
systems clean and operating to as-built specifications. The
benefits of UV-C energy can often sound a little too good to
be true. But, with tens of thousands of installs and backing
by ASHRAE, it becomes a benefit too good to miss out on.

REFERENCES

1. 2011 ASHRAE Handbook: HVAC Applications. ASHRAE. Chapter 60: Ultraviolet Air and Surface Treatment. https://www.ashrae.org/resources--publications/Table-of-Contents-2011-ASHRAE-Handbook-HVAC-Applications
2. Uptime Institute's 2013 Data Center Industry Survey. http://c.ymcdn.com/sites/www.data-central.org/resource/collection/BC649AE0-4223-4EDE-92C7-29A659EF0900/uptime-institute-2013-data-center-survey.pdf

REPRINTS OF THIS ARTICLE are available by contacting Jill DeVries at devriesj@bnpmedia.com or at 248-244-1726.

Read this article online at www.missioncriticalmagazine.com/forrestfencl

NEWS
CALENDAR
Industry Events

q
100% renewable
Equinix, Inc. has announced that it has signed a power purchase agreement (PPA) for 105 megawatts (MWAC) of new solar power with SunEdison, Inc., the largest global renewable energy development company. The purchase will cover all of Equinix's California data centers, including 11 facilities located in the Los Angeles and Silicon Valley metro areas, as well as its Redwood City, CA, global headquarters. With this deployment, Equinix's data center footprint will increase its use of clean, renewable sources from 30% to 43% globally.
The project, known as the Mount Signal Solar II project, will be located in the territory of San Diego Gas & Electric near Calexico, CA, just north of the United States-Mexico border. Construction of the solar farm, which will have a total capacity of 150 MWAC, is expected to begin in 2015 and achieve commercial operation in the second half of 2016.
"The Mount Signal Solar project is another demonstration of SunEdison's ability to deliver cost-effective renewable energy. Smart companies like Equinix know they can rely on SunEdison to help them save money, meet their sustainability goals, and create valuable jobs in the local community," said Paul Gaynor, executive vice president, SunEdison EMEA and Americas.
The Mount Signal Solar II project is expected to generate 300,000 MWh per year to offset Equinix's electrical consumption. The project will effectively reduce Equinix's carbon footprint by over 180 million lbs of CO2 annually, the equivalent of taking 18,000 passenger cars off of U.S. roads each year.1 Equinix will also receive Green-e certified renewable energy certificates from SunEdison to bridge the approximately 12 months from contract execution to project completion.
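The offset figures above follow directly from an emissions-factor multiplication. The short sketch below redoes that arithmetic; the per-MWh and per-car factors are assumed round numbers chosen for illustration, not values taken from the EPA sources cited in the footnote at the end of this story.

```python
# Back-of-the-envelope check of the quoted offset figures.
MWH_PER_YEAR = 300_000     # expected annual solar generation (from the article)
LBS_CO2_PER_MWH = 600      # assumed regional grid emissions factor
LBS_CO2_PER_CAR = 10_000   # assumed annual CO2 per passenger car

offset_lbs = MWH_PER_YEAR * LBS_CO2_PER_MWH
cars = offset_lbs / LBS_CO2_PER_CAR

print(f"CO2 offset: {offset_lbs / 1e6:.0f} million lbs/year")  # 180 million
print(f"Passenger-car equivalent: {cars:,.0f}")                # 18,000
```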
"Equinix's purpose is to power the digital economy, and we believe that it is important to do this in an environmentally sustainable way. This PPA is a major milestone in achieving our long-term goal of reaching 100% renewable power, and it solidifies Equinix's position as a global data center leader in sustainability," said Karl Strohmeyer, president, Equinix Americas.
Earlier this year Equinix announced its commitment to 100% clean and renewable energy for its entire global footprint of 105 data centers located in 33 markets. Through ongoing development of partnerships, Equinix continues to deploy innovative new technologies to make this commitment a reality. Most recently it deployed a 342-kWp PV solar system at its SG3 International Business Exchange (IBX) data center in Singapore, and it is in the process of installing a 1-MW fuel cell system, fueled by biogas, at its Silicon Valley (SV5) data center.


Equinix also announced that it has signed on to the World Resources Institute (WRI) and World Wildlife Fund (WWF) Corporate Renewable Energy Buyers' Principles, which are used to advocate for easier access to, cost competitiveness of, and increased grid use of renewable energy sources. Additionally, Equinix has joined the Rocky Mountain Institute's Business Renewables Center (BRC), a collaborative platform aimed at accelerating corporate renewable energy procurement. SunEdison is a founding project developer member of the BRC.
"Equinix is demonstrating its sustainability leadership in the fast-growing, power-intensive data center industry. It is exciting to have Equinix as a member of the BRC, working with SunEdison, one of our founding members, to reach impressive renewable targets in record time. I am also pleased that, as a member of our BRC community, Equinix is ready to take steps to share some of its experience with other corporate buyers to accelerate the deployment of renewable energy. The shared goal of our community is to add another 60 GW of wind and solar energy by 2030, and we need more companies like Equinix to blaze the trail," said Herve Touati, managing director, Rocky Mountain Institute.
1. Calculated using U.S. EPA eGRID 2014 v1 regional emissions factors and Global Warming Potentials from the IPCC 4th Assessment and the U.S. EPA GHG Equivalencies Calculator.
CALENDAR
Industry Events

NOVEMBER

The Fourth Annual Southwest Data Center Summit, November 9, 2015, Sheraton Phoenix. http://cre-events.com/southwestdc2015/

IMN's Data Center & Cloud Services Infrastructure Forum, November 12, 2015, Ritz-Carlton Half Moon Bay, Half Moon Bay, CA. http://www.imn.org/dcwest15

7x24 Fall 2015 Conference: Commitment to Excellence, November 15-18, 2015, JW Marriott San Antonio Hill Country, San Antonio, TX. http://conferences.7x24exchange.org/fall15/

DECEMBER

Gartner Data Center, Infrastructure & Operations Management Conference 2015, December 7-10, 2015, The Venetian Hotel Resort & Casino, Las Vegas. http://gtnr.it/1ji010n


PRODUCTS
Ethernet Switches from D-Link

D-Link's latest addition to its EasySmart line of affordable Gigabit Ethernet smart switches, the DGS-1100-24P 24-port PoE+ switch, is enclosed in a rugged metal housing and is easily installed in a standard 19-in. rack. The DGS-1100-24P is a robust, reliable, built-to-last PoE switch suitable for all types of networks.
With Power-over-Ethernet capabilities on the first 12 ports, the DGS-1100-24P simplifies deployments with IP cameras, VoIP phones, wireless access points, and other standards-compliant powered devices. The DGS-1100-24P complies with the IEEE 802.3at PoE+ standard (up to 30 W per port) and has a 100-W power budget, which can be allocated across the 12 PoE ports in any manner necessary.
Managed via an intuitive, browser-based graphical user interface (GUI), all D-Link EasySmart switches support essential L2 switching features such as 802.1Q VLANs, QoS, bandwidth limiting, link aggregation, and IGMP snooping. The EasySmart management interface also provides administrators with a convenient way to monitor and control PoE functions, including displaying real-time PoE power usage and remotely rebooting IP cameras or access points from anywhere on the network, including over the Internet.

us.dlink.com
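For readers budgeting a deployment on a switch like this, the PoE math reduces to two checks: no port above its PoE+ class limit, and the sum of all port loads within the shared budget. The helper below is a hypothetical sketch of that check using the specs quoted above; the device loads in the example are made up.

```python
# Sketch of a shared-PoE-budget check for a switch like the one
# described above: 12 PoE-capable ports, 30 W PoE+ per-port ceiling,
# 100 W total budget.
POE_PORTS = 12
PER_PORT_MAX_W = 30.0
TOTAL_BUDGET_W = 100.0

def fits_budget(loads_w):
    """Return True if per-port loads respect both the class and budget limits."""
    for port, watts in loads_w.items():
        if not 1 <= port <= POE_PORTS:
            raise ValueError(f"port {port} is not PoE-capable")
        if watts > PER_PORT_MAX_W:
            return False          # exceeds the 30 W PoE+ per-port limit
    return sum(loads_w.values()) <= TOTAL_BUDGET_W

# Hypothetical plan: eight 7-W IP cameras plus two 15-W access points.
plan = {port: 7.0 for port in range(1, 9)}
plan.update({9: 15.0, 10: 15.0})
print(fits_budget(plan))   # True: 86 W total fits within the 100 W budget
```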

Fiber Cables from Tripp Lite

Tripp Lite has expanded its line of premium fiber cables for high-density data center applications. Multiple series of cables have been added, including 40GbE MTP/MPO 12-fiber, 100GbE MTP/MPO 24-fiber, and 40GbE MTP/MPO fan-out cables. All models feature convenient push-pull tab connectors that can be installed or removed with one hand; no tools needed. A space-saving design and quality construction make these cables ideal for LANs, SANs (Fibre Channel), high-speed parallel interconnects for head-ends, telecommunications rooms, and data centers.
Features include maximum accessibility; a slim uniboot design that saves space and makes cables easier to manage; a premium-grade ceramic ferrule designed specifically for high-demand applications; backward compatibility with existing 50/125 fiber; and availability in multiple lengths.

www.tripplite.com

Application Stack Monitoring from SolarWinds

The company has introduced the new Storage Resource Monitor (SRM) and released significant updates to Server & Application Monitor (SAM), Virtualization Manager, and Web Performance Monitor (WPM), including greater integration with the SolarWinds Orion technology backbone and the new SolarWinds AppStack dashboard.
The new Storage Resource Monitor is designed to provide IT with the necessary insight into storage resources and their potential performance impact on virtual environments, ultimately ensuring business-critical application performance. Storage Resource Monitor supports dozens of common arrays and features new NetApp Cluster-mode support, as well as the NetApp IBM N-series, NetApp E-series family, EMC VNX family, and Dell EqualLogic PS Series arrays, to name a few. With real-time visibility into heterogeneous SAN and NAS arrays and integration with the Orion technology and AppStack dashboard, an IT pro is able to view and manage an IT environment from array to application.

www.solarwinds.com


UPS from Emerson Network Power

Emerson Network Power has introduced the Liebert NXL 400-kVA, 575-600V online, maximum-protection uninterruptible power supply (UPS) system. The new system extends native 575-600V capabilities to power applications that require a high degree of resiliency. This UL-listed UPS is now available in the U.S. and Canada to meet the needs of 575-600V applications. It expands the 575-600V offerings in the Liebert NXL product family, which ranges from 250 to 1,100 kVA. The Liebert NXL UPS is engineered to enable the highest levels of application availability. Its design allows operation at 100% load under a stack-up of conditions that would require other systems to de-rate their output or compromise system availability. Simultaneous conditions such as clogged air filters, high ambient temperature, high altitude, fan failure, and low- or high-line conditions have been mitigated to ensure full rating at 100% operating loads. Like other Liebert NXL UPS models, it is available in single-module and multi-module (1+N) systems to achieve redundancy and maximum reliability.

www.emersonnetworkpower.com


HEARD ON THE INTERNET

From Our Website

September 24, 2015

How Indirect Evaporative Data Center Cooling Can Save You Money in Less than 3 Minutes (Video)
Stulz blog
Author: Aaron Sabino
Indirect evaporative cooling utilizes an air-to-air heat exchanger to meet the needs of modern IT cooling without introducing contaminants from the outside air. By utilizing this method of cooling, it is possible to achieve savings of up to 75% over traditional mechanical cooling methods.
If you're wondering how indirect evaporative cooling provides greater efficiency in the data center, even in less-than-ideal conditions, you've come to the right place.

September 22, 2015

The Future is Now, Kind of
Compass Points
Author: Chris Crosby
When we were kids, our visions of the future included an array of technological wonders like flying cars, people living on the moon, and my personal favorite, being able to use my watch as a communications device a la Dick Tracy. Naturally, some of the things we imagined are still imaginary (the world still awaits its first flying car dealership), but some of our more esoteric visions of the future have become reality, wearable technology being a prime example. Today it is possible to make a call from your wristwatch, but I think one of the lessons that we've learned over the [...]

TOP KEYWORDS:
1. critical
2. mission
3. mitsubishi
4. summit
5. ups
6. power
7. magazine
8. data
9. center
10. missioncritical

MOST VIEWED
1. The Second Annual Texas Data Center Summit
Set For October 22
2. Schneider Electric Recognized For DCIM
3. Cosentry Expands Midwestern Data Center
Operations
4. Mitsubishi Electric Introduces New SUMMIT Series
UPS
5. Modular Data Centers from Cannon T4 Inc.
6. Ensite Solutions Named One Of The Top 500
Fastest Growing Companies In The U.S.
7. MOD Mission Critical's Colo By The U Joins 111 TSPs Intelligent Cloud Ecosystem
8. Pulling Back The Curtain On TCO Calculations
9. Rack Handles from Piton Engineering
10. Schneider Electric Launches New Brand Strategy

September 17, 2015

Do Your Workers Have What It Takes?
Emerson Network Power
Author: Wally Vahlstrom
It's no secret that working around electricity can be dangerous. In fact, over the last 10 years, the U.S. Bureau of Labor Statistics reports 2,000 fatal and more than 24,000 non-fatal electrical injuries.
To help better protect workers from the devastating consequences of electrocution and arc flash, the National Fire Protection Association (NFPA) develops and regularly updates its guidelines for creating a safe electrical work environment. The most recent versions of NFPA 70E: Standard for Electrical Safety in the Workplace outline increasingly stringent qualifications for working on or around electrical equipment, and they include more robust training requirements for electrical workers.

MOST DOWNLOADED
1. STULZ Data Center Design Guide
2. Cooling Capacity Factor (CCF) Reveals Stranded Capacity and Data Center Cost Savings
3. Maintaining Data Center Performance
4. Airflow Impedance
5. Understanding Joint Commission Standards
6. Codes and Standards for Onsite Power Systems
7. Cost Effective Alternative To CRAH

September 25, 2015


@SDN_RR
Dell Expands Networking Portfolio for Campus, Data
Center
http://t.co/7NRUWkDlP6 http://t.co/vrJfGakovR


Webinars: Free Education Online

Mission Critical's webinars are an easy, convenient way to interact with industry experts and learn about the latest industry topics. View all of our webinars at webinars.missioncriticalmagazine.com.

Sponsorships available; contact your sales representative for details.

ADVERTISERS INDEX
To receive free information about products and services mentioned in Mission Critical, visit www.missioncriticalmagazine.com/instantproductinfo and simply enter the info number from this ad index on the convenient form. Or use the Free Information Card on the opposite page.

7x24 XChange, www.7x24exchange.org, Page 59
Altronic GTI, www.gti-altronic.com, Page 39
AMCO Enclosures, www.amcoenclosures.com, Page 53
ASCO, www.ascoapu.com, 800-800-ASCO, Page BC
BOON EDAM, www.boonedam.com, Page 45
CableSys, www.cablesys.com, 800-555-7176, Page 19
Caterpillar, www.cat.com, Page 33
ComRent, www.comrent.com, 888-881-7118, Page 13
Copper Development Association Inc., www.copper.org, Page 17
Cyber Sciences, www.cyber-sciences.com, 615-890-6709, Page 58
Data Aire, Inc., www.dataaire.com, Page 35
ebm-papst Inc., www.ebmpapst.us, Page 7
Geist Power, www.geistglobal.com, Pages 21, 23, 25
Methode, www.methode-activeenergy.com, 844-869-3115, Page 3
Miratech, www.miratechcorp.com, 800-640-3141, Page 15
Mitsubishi Electric Power Products, Inc., www.meppi.com, 724-779-1664, Page IFC
Movincool, www.movincool.com, 800-264-9573, Page 37
Russelectric, www.russelectric.com, 800-225-5250, Page 27
Tate, www.tateairflow.com, Page 47
Universal Electric, www.StarlinePower.com, Page 9
Upsite Technologies, www.upsite.com, Page 29
Xcel Energy, ResponsibleByNature.com/Business, Page 31

WEBINAR
Saving Money With Smart Data Center Design

WHAT: FREE webinar
WHERE: Right from your computer; register at http://webinars.missioncriticalmagazine.com
WHEN: November 18, 2015; 2 p.m. ET
DURATION: 60 minutes, including live Q&A
CONTINUING EDUCATION: 1.0 PDH Certificate of Completion*

By using a UPS catcher system, data centers can get high levels of redundancy without the costs that come with a 2N system. A traditional catcher system (an unloaded UPS in reserve that feeds the bypass of the primary UPSs) usually comes in a fixed ratio of primary UPSs to catcher UPS not exceeding 3:1. But there's a potential for stranded redundancy if intelligent operation isn't incorporated. In response to this weakness, consider a new smart catcher design.
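To see where the stranded redundancy comes from, here is a minimal sketch of the fixed-ratio arithmetic described above. The 3:1 ceiling comes from that description; the module counts are illustrative assumptions, not a vendor design rule.

```python
import math

def catchers_needed(primaries, max_ratio=3):
    """Fixed-ratio rule: one reserve (catcher) UPS per max_ratio primaries."""
    return math.ceil(primaries / max_ratio)

def loaded_fraction(primaries, max_ratio=3):
    """Share of installed UPS modules actually carrying load."""
    total = primaries + catchers_needed(primaries, max_ratio)
    return primaries / total

for n in (3, 4, 6, 10):
    print(f"{n} primaries -> {catchers_needed(n)} catcher(s), "
          f"{loaded_fraction(n):.0%} of modules loaded")
# Four primaries already force a second catcher under a rigid 3:1 rule;
# that extra, mostly idle module is the stranded redundancy a smarter
# catcher design tries to avoid.
```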

In this webinar, we will discuss:
What catcher systems are and how they work
Saving money with new smart design options
How scalability and expansion fit into the design
Operational benefits

EXPERT SPEAKERS:
Ed Spears, Product Marketing Manager, Critical Power Solutions Division, Eaton
John Collins, Product Line Manager, Large Data Center Solutions, Eaton

Sponsored by: Eaton

*It is up to each individual to verify whether the course is approved by his or her state licensing board.
REGISTER AT: http://webinars.missioncriticalmagazine.com

FREE! Reserve your spot and view today!
The union of ASCO, Avtron and Froment.

The global leader in load banks.

We've put all the pieces together.
The proven, market-leading load bank technologies of Avtron and Froment are a perfect fit with ASCO Power Technologies. Combining world-class innovation and more than 200 years total experience, ASCO is your one-stop partner that offers complete solutions that you can rely on to solve any power testing requirement.

Sigma brings cost-effective solutions to today's power testing requirements, which can require high-level instrumentation, data capture, and verification, with the ability to link multiple load banks of differing capacities or combinations, controlled from one handheld terminal or PC.

Broadest Portfolio
No company in the world can match the depth and breadth of our portfolio. From simple kW portable load banks to multiple MVA, we can provide a solution for virtually any application. We revolve around your needs, with the expertise and technical know-how to assemble custom solutions that provide leading power test solutions.

Technology
Innovation is at our core, complemented by our commitment to build load banks to the highest standards (ISO, UL/CUL, CSA, CE, IEC, NFPA). Technical leadership includes Sigma control, which is sector-leading in simplicity, ease of use, and accuracy.

Experience
Ninety years combined experience in load banks is only matched by the years ASCO has been providing power solutions. Our team of experts has provided countless standard and custom load banks to the industry over the years.

www.emersonnetworkpower.com/loadbank
ASCO: ascoapu.com, customercare@asco.com
Emerson and ASCO are trademarks of Emerson Electric Co. or one of its affiliated companies. Emerson Electric Co. ASCO Power Technologies.