MCT USE ONLY.

STUDENT USE PROHIBITED


OFFICIAL MICROSOFT LEARNING PRODUCT

20533E
Implementing Microsoft Azure
Infrastructure Solutions

Information in this document, including URL and other Internet Web site references, is subject to change
without notice. Unless otherwise noted, the example companies, organizations, products, domain names,
e-mail addresses, logos, people, places, and events depicted herein are fictitious, and no association with
any real company, organization, product, domain name, e-mail address, logo, person, place or event is
intended or should be inferred. Complying with all applicable copyright laws is the responsibility of the
user. Without limiting the rights under copyright, no part of this document may be reproduced, stored in
or introduced into a retrieval system, or transmitted in any form or by any means (electronic, mechanical,
photocopying, recording, or otherwise), or for any purpose, without the express written permission of
Microsoft Corporation.

Microsoft may have patents, patent applications, trademarks, copyrights, or other intellectual property
rights covering subject matter in this document. Except as expressly provided in any written license
agreement from Microsoft, the furnishing of this document does not give you any license to these
patents, trademarks, copyrights, or other intellectual property.

The names of manufacturers, products, or URLs are provided for informational purposes only and
Microsoft makes no representations and warranties, either expressed, implied, or statutory, regarding
these manufacturers or the use of the products with any Microsoft technologies. The inclusion of a
manufacturer or product does not imply endorsement by Microsoft of the manufacturer or product. Links
may be provided to third party sites. Such sites are not under the control of Microsoft and Microsoft is not
responsible for the contents of any linked site or any link contained in a linked site, or any changes or
updates to such sites. Microsoft is not responsible for webcasting or any other form of transmission
received from any linked site. Microsoft is providing these links to you only as a convenience, and the
inclusion of any link does not imply endorsement by Microsoft of the site or the products contained
therein.
© 2018 Microsoft Corporation. All rights reserved.

Microsoft and the trademarks listed at http://www.microsoft.com/trademarks are trademarks of the
Microsoft group of companies. All other trademarks are property of their respective owners.

Product Number: 20533E

Part Number: X21-73677

Released: 06/2018
MICROSOFT LICENSE TERMS
MICROSOFT INSTRUCTOR-LED COURSEWARE

These license terms are an agreement between Microsoft Corporation (or based on where you live, one of its
affiliates) and you. Please read them. They apply to your use of the content accompanying this agreement which
includes the media on which you received it, if any. These license terms also apply to Trainer Content and any
updates and supplements for the Licensed Content unless other terms accompany those items. If so, those terms
apply.

BY ACCESSING, DOWNLOADING OR USING THE LICENSED CONTENT, YOU ACCEPT THESE TERMS.
IF YOU DO NOT ACCEPT THEM, DO NOT ACCESS, DOWNLOAD OR USE THE LICENSED CONTENT.

If you comply with these license terms, you have the rights below for each license you acquire.

1. DEFINITIONS.

a. “Authorized Learning Center” means a Microsoft IT Academy Program Member, Microsoft Learning
Competency Member, or such other entity as Microsoft may designate from time to time.

b. “Authorized Training Session” means the instructor-led training class using Microsoft Instructor-Led
Courseware conducted by a Trainer at or through an Authorized Learning Center.

c. “Classroom Device” means one (1) dedicated, secure computer that an Authorized Learning Center owns
or controls that is located at an Authorized Learning Center’s training facilities that meets or exceeds the
hardware level specified for the particular Microsoft Instructor-Led Courseware.

d. “End User” means an individual who is (i) duly enrolled in and attending an Authorized Training Session
or Private Training Session, (ii) an employee of an MPN Member, or (iii) a Microsoft full-time employee.

e. “Licensed Content” means the content accompanying this agreement which may include the Microsoft
Instructor-Led Courseware or Trainer Content.

f. “Microsoft Certified Trainer” or “MCT” means an individual who is (i) engaged to teach a training session
to End Users on behalf of an Authorized Learning Center or MPN Member, and (ii) currently certified as a
Microsoft Certified Trainer under the Microsoft Certification Program.

g. “Microsoft Instructor-Led Courseware” means the Microsoft-branded instructor-led training course that
educates IT professionals and developers on Microsoft technologies. A Microsoft Instructor-Led
Courseware title may be branded as MOC, Microsoft Dynamics or Microsoft Business Group courseware.

h. “Microsoft IT Academy Program Member” means an active member of the Microsoft IT Academy
Program.

i. “Microsoft Learning Competency Member” means an active member of the Microsoft Partner Network
program in good standing that currently holds the Learning Competency status.

j. “MOC” means the “Official Microsoft Learning Product” instructor-led courseware known as Microsoft
Official Course that educates IT professionals and developers on Microsoft technologies.

k. “MPN Member” means an active Microsoft Partner Network program member in good standing.
l. “Personal Device” means one (1) personal computer, device, workstation or other digital electronic device
that you personally own or control that meets or exceeds the hardware level specified for the particular
Microsoft Instructor-Led Courseware.

m. “Private Training Session” means the instructor-led training classes provided by MPN Members for
corporate customers to teach a predefined learning objective using Microsoft Instructor-Led Courseware.
These classes are not advertised or promoted to the general public and class attendance is restricted to
individuals employed by or contracted by the corporate customer.

n. “Trainer” means (i) an academically accredited educator engaged by a Microsoft IT Academy Program
Member to teach an Authorized Training Session, and/or (ii) an MCT.

o. “Trainer Content” means the trainer version of the Microsoft Instructor-Led Courseware and additional
supplemental content designated solely for Trainers’ use to teach a training session using the Microsoft
Instructor-Led Courseware. Trainer Content may include Microsoft PowerPoint presentations, trainer
preparation guide, train the trainer materials, Microsoft OneNote packs, classroom setup guide and Pre-
release course feedback form. To clarify, Trainer Content does not include any software, virtual hard
disks or virtual machines.

2. USE RIGHTS. The Licensed Content is licensed, not sold. The Licensed Content is licensed on a one copy
per user basis, such that you must acquire a license for each individual that accesses or uses the Licensed
Content.

2.1 Below are five separate sets of use rights. Only one set of rights applies to you.

a. If you are a Microsoft IT Academy Program Member:


i. Each license acquired on behalf of yourself may only be used to review one (1) copy of the Microsoft
Instructor-Led Courseware in the form provided to you. If the Microsoft Instructor-Led Courseware is
in digital format, you may install one (1) copy on up to three (3) Personal Devices. You may not
install the Microsoft Instructor-Led Courseware on a device you do not own or control.
ii. For each license you acquire on behalf of an End User or Trainer, you may either:
1. distribute one (1) hard copy version of the Microsoft Instructor-Led Courseware to one (1) End
User who is enrolled in the Authorized Training Session, and only immediately prior to the
commencement of the Authorized Training Session that is the subject matter of the Microsoft
Instructor-Led Courseware being provided, or
2. provide one (1) End User with the unique redemption code and instructions on how they can
access one (1) digital version of the Microsoft Instructor-Led Courseware, or
3. provide one (1) Trainer with the unique redemption code and instructions on how they can
access one (1) Trainer Content,
provided you comply with the following:
iii. you will only provide access to the Licensed Content to those individuals who have acquired a valid
license to the Licensed Content,
iv. you will ensure each End User attending an Authorized Training Session has their own valid licensed
copy of the Microsoft Instructor-Led Courseware that is the subject of the Authorized Training
Session,
v. you will ensure that each End User provided with the hard-copy version of the Microsoft Instructor-
Led Courseware will be presented with a copy of this agreement and each End User will agree that
their use of the Microsoft Instructor-Led Courseware will be subject to the terms in this agreement
prior to providing them with the Microsoft Instructor-Led Courseware. Each individual will be required
to denote their acceptance of this agreement in a manner that is enforceable under local law prior to
their accessing the Microsoft Instructor-Led Courseware,
vi. you will ensure that each Trainer teaching an Authorized Training Session has their own valid
licensed copy of the Trainer Content that is the subject of the Authorized Training Session,
vii. you will only use qualified Trainers who have in-depth knowledge of and experience with the
Microsoft technology that is the subject of the Microsoft Instructor-Led Courseware being taught for
all your Authorized Training Sessions,
viii. you will only deliver a maximum of 15 hours of training per week for each Authorized Training
Session that uses a MOC title, and
ix. you acknowledge that Trainers that are not MCTs will not have access to all of the trainer resources
for the Microsoft Instructor-Led Courseware.

b. If you are a Microsoft Learning Competency Member:


i. Each license acquired on behalf of yourself may only be used to review one (1) copy of the Microsoft
Instructor-Led Courseware in the form provided to you. If the Microsoft Instructor-Led Courseware is
in digital format, you may install one (1) copy on up to three (3) Personal Devices. You may not
install the Microsoft Instructor-Led Courseware on a device you do not own or control.
ii. For each license you acquire on behalf of an End User or Trainer, you may either:
1. distribute one (1) hard copy version of the Microsoft Instructor-Led Courseware to one (1) End
User attending the Authorized Training Session and only immediately prior to the
commencement of the Authorized Training Session that is the subject matter of the Microsoft
Instructor-Led Courseware provided, or
2. provide one (1) End User attending the Authorized Training Session with the unique redemption
code and instructions on how they can access one (1) digital version of the Microsoft Instructor-
Led Courseware, or
3. you will provide one (1) Trainer with the unique redemption code and instructions on how they
can access one (1) Trainer Content,
provided you comply with the following:
iii. you will only provide access to the Licensed Content to those individuals who have acquired a valid
license to the Licensed Content,
iv. you will ensure that each End User attending an Authorized Training Session has their own valid
licensed copy of the Microsoft Instructor-Led Courseware that is the subject of the Authorized
Training Session,
v. you will ensure that each End User provided with a hard-copy version of the Microsoft Instructor-Led
Courseware will be presented with a copy of this agreement and each End User will agree that their
use of the Microsoft Instructor-Led Courseware will be subject to the terms in this agreement prior to
providing them with the Microsoft Instructor-Led Courseware. Each individual will be required to
denote their acceptance of this agreement in a manner that is enforceable under local law prior to
their accessing the Microsoft Instructor-Led Courseware,
vi. you will ensure that each Trainer teaching an Authorized Training Session has their own valid
licensed copy of the Trainer Content that is the subject of the Authorized Training Session,
vii. you will only use qualified Trainers who hold the applicable Microsoft Certification credential that is
the subject of the Microsoft Instructor-Led Courseware being taught for your Authorized Training
Sessions,
viii. you will only use qualified MCTs who also hold the applicable Microsoft Certification credential that is
the subject of the MOC title being taught for all your Authorized Training Sessions using MOC,
ix. you will only provide access to the Microsoft Instructor-Led Courseware to End Users, and
x. you will only provide access to the Trainer Content to Trainers.
c. If you are an MPN Member:
i. Each license acquired on behalf of yourself may only be used to review one (1) copy of the Microsoft
Instructor-Led Courseware in the form provided to you. If the Microsoft Instructor-Led Courseware is
in digital format, you may install one (1) copy on up to three (3) Personal Devices. You may not
install the Microsoft Instructor-Led Courseware on a device you do not own or control.
ii. For each license you acquire on behalf of an End User or Trainer, you may either:
1. distribute one (1) hard copy version of the Microsoft Instructor-Led Courseware to one (1) End
User attending the Private Training Session, and only immediately prior to the commencement
of the Private Training Session that is the subject matter of the Microsoft Instructor-Led
Courseware being provided, or
2. provide one (1) End User who is attending the Private Training Session with the unique
redemption code and instructions on how they can access one (1) digital version of the
Microsoft Instructor-Led Courseware, or
3. you will provide one (1) Trainer who is teaching the Private Training Session with the unique
redemption code and instructions on how they can access one (1) Trainer Content,
provided you comply with the following:
iii. you will only provide access to the Licensed Content to those individuals who have acquired a valid
license to the Licensed Content,
iv. you will ensure that each End User attending a Private Training Session has their own valid licensed
copy of the Microsoft Instructor-Led Courseware that is the subject of the Private Training Session,
v. you will ensure that each End User provided with a hard copy version of the Microsoft Instructor-Led
Courseware will be presented with a copy of this agreement and each End User will agree that their
use of the Microsoft Instructor-Led Courseware will be subject to the terms in this agreement prior to
providing them with the Microsoft Instructor-Led Courseware. Each individual will be required to
denote their acceptance of this agreement in a manner that is enforceable under local law prior to
their accessing the Microsoft Instructor-Led Courseware,
vi. you will ensure that each Trainer teaching a Private Training Session has their own valid licensed
copy of the Trainer Content that is the subject of the Private Training Session,
vii. you will only use qualified Trainers who hold the applicable Microsoft Certification credential that is
the subject of the Microsoft Instructor-Led Courseware being taught for all your Private Training
Sessions,
viii. you will only use qualified MCTs who hold the applicable Microsoft Certification credential that is the
subject of the MOC title being taught for all your Private Training Sessions using MOC,
ix. you will only provide access to the Microsoft Instructor-Led Courseware to End Users, and
x. you will only provide access to the Trainer Content to Trainers.

d. If you are an End User:


For each license you acquire, you may use the Microsoft Instructor-Led Courseware solely for your
personal training use. If the Microsoft Instructor-Led Courseware is in digital format, you may access the
Microsoft Instructor-Led Courseware online using the unique redemption code provided to you by the
training provider and install and use one (1) copy of the Microsoft Instructor-Led Courseware on up to
three (3) Personal Devices. You may also print one (1) copy of the Microsoft Instructor-Led Courseware.
You may not install the Microsoft Instructor-Led Courseware on a device you do not own or control.

e. If you are a Trainer:


i. For each license you acquire, you may install and use one (1) copy of the Trainer Content in the
form provided to you on one (1) Personal Device solely to prepare and deliver an Authorized
Training Session or Private Training Session, and install one (1) additional copy on another Personal
Device as a backup copy, which may be used only to reinstall the Trainer Content. You may not
install or use a copy of the Trainer Content on a device you do not own or control. You may also
print one (1) copy of the Trainer Content solely to prepare for and deliver an Authorized Training
Session or Private Training Session.
ii. You may customize the written portions of the Trainer Content that are logically associated with
instruction of a training session in accordance with the most recent version of the MCT agreement.
If you elect to exercise the foregoing rights, you agree to comply with the following: (i)
customizations may only be used for teaching Authorized Training Sessions and Private Training
Sessions, and (ii) all customizations will comply with this agreement. For clarity, any use of
“customize” refers only to changing the order of slides and content, and/or not using all the slides or
content; it does not mean changing or modifying any slide or content.

2.2 Separation of Components. The Licensed Content is licensed as a single unit and you may not
separate its components and install them on different devices.

2.3 Redistribution of Licensed Content. Except as expressly provided in the use rights above, you may
not distribute any Licensed Content or any portion thereof (including any permitted modifications) to any
third parties without the express written permission of Microsoft.

2.4 Third Party Notices. The Licensed Content may include third party code that Microsoft, not the
third party, licenses to you under this agreement. Notices, if any, for the third party code are included
for your information only.

2.5 Additional Terms. Some Licensed Content may contain components with additional terms,
conditions, and licenses regarding its use. Any non-conflicting terms in those conditions and licenses also
apply to your use of that respective component and supplement the terms described in this agreement.

3. LICENSED CONTENT BASED ON PRE-RELEASE TECHNOLOGY. If the Licensed Content’s subject
matter is based on a pre-release version of Microsoft technology (“Pre-release”), then in addition to the
other provisions in this agreement, these terms also apply:

a. Pre-Release Licensed Content. This Licensed Content subject matter is based on the Pre-release version of
the Microsoft technology. The technology may not work the way a final version of the technology will
and we may change the technology for the final version. We also may not release a final version.
Licensed Content based on the final version of the technology may not contain the same information as
the Licensed Content based on the Pre-release version. Microsoft is under no obligation to provide you
with any further content, including any Licensed Content based on the final version of the technology.

b. Feedback. If you agree to give feedback about the Licensed Content to Microsoft, either directly or
through its third party designee, you give to Microsoft without charge, the right to use, share and
commercialize your feedback in any way and for any purpose. You also give to third parties, without
charge, any patent rights needed for their products, technologies and services to use or interface with
any specific parts of a Microsoft technology, Microsoft product, or service that includes the feedback.
You will not give feedback that is subject to a license that requires Microsoft to license its technology,
technologies, or products to third parties because we include your feedback in them. These rights
survive this agreement.

c. Pre-release Term. If you are a Microsoft IT Academy Program Member, Microsoft Learning
Competency Member, MPN Member or Trainer, you will cease using all copies of the Licensed Content on
the Pre-release technology upon (i) the date which Microsoft informs you is the end date for using the
Licensed Content on the Pre-release technology, or (ii) sixty (60) days after the commercial release of the
technology that is the subject of the Licensed Content, whichever is earlier (“Pre-release term”).
Upon expiration or termination of the Pre-release term, you will irretrievably delete and destroy all copies
of the Licensed Content in your possession or under your control.
4. SCOPE OF LICENSE. The Licensed Content is licensed, not sold. This agreement only gives you some
rights to use the Licensed Content. Microsoft reserves all other rights. Unless applicable law gives you more
rights despite this limitation, you may use the Licensed Content only as expressly permitted in this
agreement. In doing so, you must comply with any technical limitations in the Licensed Content that only
allow you to use it in certain ways. Except as expressly permitted in this agreement, you may not:
• access or allow any individual to access the Licensed Content if they have not acquired a valid license
for the Licensed Content,
• alter, remove or obscure any copyright or other protective notices (including watermarks), branding
or identifications contained in the Licensed Content,
• modify or create a derivative work of any Licensed Content,
• publicly display, or make the Licensed Content available for others to access or use,
• copy, print, install, sell, publish, transmit, lend, adapt, reuse, link to or post, make available or
distribute the Licensed Content to any third party,
• work around any technical limitations in the Licensed Content, or
• reverse engineer, decompile, remove or otherwise thwart any protections or disassemble the
Licensed Content except and only to the extent that applicable law expressly permits, despite this
limitation.

5. RESERVATION OF RIGHTS AND OWNERSHIP. Microsoft reserves all rights not expressly granted to
you in this agreement. The Licensed Content is protected by copyright and other intellectual property laws
and treaties. Microsoft or its suppliers own the title, copyright, and other intellectual property rights in the
Licensed Content.

6. EXPORT RESTRICTIONS. The Licensed Content is subject to United States export laws and regulations.
You must comply with all domestic and international export laws and regulations that apply to the Licensed
Content. These laws include restrictions on destinations, end users and end use. For additional information,
see www.microsoft.com/exporting.

7. SUPPORT SERVICES. Because the Licensed Content is “as is”, we may not provide support services for it.

8. TERMINATION. Without prejudice to any other rights, Microsoft may terminate this agreement if you fail
to comply with the terms and conditions of this agreement. Upon termination of this agreement for any
reason, you will immediately stop all use of and delete and destroy all copies of the Licensed Content in
your possession or under your control.

9. LINKS TO THIRD PARTY SITES. You may link to third party sites through the use of the Licensed
Content. The third party sites are not under the control of Microsoft, and Microsoft is not responsible for
the contents of any third party sites, any links contained in third party sites, or any changes or updates to
third party sites. Microsoft is not responsible for webcasting or any other form of transmission received
from any third party sites. Microsoft is providing these links to third party sites to you only as a
convenience, and the inclusion of any link does not imply an endorsement by Microsoft of the third party
site.

10. ENTIRE AGREEMENT. This agreement, and any additional terms for the Trainer Content, updates and
supplements are the entire agreement for the Licensed Content, updates and supplements.

11. APPLICABLE LAW.


a. United States. If you acquired the Licensed Content in the United States, Washington state law governs
the interpretation of this agreement and applies to claims for breach of it, regardless of conflict of laws
principles. The laws of the state where you live govern all other claims, including claims under state
consumer protection laws, unfair competition laws, and in tort.
b. Outside the United States. If you acquired the Licensed Content in any other country, the laws of that
country apply.

12. LEGAL EFFECT. This agreement describes certain legal rights. You may have other rights under the laws
of your country. You may also have rights with respect to the party from whom you acquired the Licensed
Content. This agreement does not change your rights under the laws of your country if the laws of your
country do not permit it to do so.

13. DISCLAIMER OF WARRANTY. THE LICENSED CONTENT IS LICENSED "AS-IS" AND "AS
AVAILABLE." YOU BEAR THE RISK OF USING IT. MICROSOFT AND ITS RESPECTIVE
AFFILIATES GIVE NO EXPRESS WARRANTIES, GUARANTEES, OR CONDITIONS. YOU MAY
HAVE ADDITIONAL CONSUMER RIGHTS UNDER YOUR LOCAL LAWS WHICH THIS AGREEMENT
CANNOT CHANGE. TO THE EXTENT PERMITTED UNDER YOUR LOCAL LAWS, MICROSOFT AND
ITS RESPECTIVE AFFILIATES EXCLUDE ANY IMPLIED WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT.

14. LIMITATION ON AND EXCLUSION OF REMEDIES AND DAMAGES. YOU CAN RECOVER FROM
MICROSOFT, ITS RESPECTIVE AFFILIATES AND ITS SUPPLIERS ONLY DIRECT DAMAGES UP
TO US$5.00. YOU CANNOT RECOVER ANY OTHER DAMAGES, INCLUDING CONSEQUENTIAL,
LOST PROFITS, SPECIAL, INDIRECT OR INCIDENTAL DAMAGES.

This limitation applies to


o anything related to the Licensed Content, services, content (including code) on third party Internet
sites or third-party programs; and
o claims for breach of contract, breach of warranty, guarantee or condition, strict liability, negligence,
or other tort to the extent permitted by applicable law.

It also applies even if Microsoft knew or should have known about the possibility of the damages. The
above limitation or exclusion may not apply to you because your country may not allow the exclusion or
limitation of incidental, consequential or other damages.

Please note: As this Licensed Content is distributed in Quebec, Canada, some of the clauses in this
agreement are provided below in French.

Remarque : Ce contenu sous licence étant distribué au Québec, Canada, certaines des clauses
dans ce contrat sont fournies ci-dessous en français.

EXONÉRATION DE GARANTIE. Le contenu sous licence visé par une licence est offert « tel quel ». Toute
utilisation de ce contenu sous licence est à votre seule risque et péril. Microsoft n’accorde aucune autre garantie
expresse. Vous pouvez bénéficier de droits additionnels en vertu du droit local sur la protection des
consommateurs, que ce contrat ne peut modifier. Là où elles sont permises par le droit local, les garanties
implicites de qualité marchande, d’adéquation à un usage particulier et d’absence de contrefaçon sont exclues.

LIMITATION DES DOMMAGES-INTÉRÊTS ET EXCLUSION DE RESPONSABILITÉ POUR LES
DOMMAGES. Vous pouvez obtenir de Microsoft et de ses fournisseurs une indemnisation en cas de dommages
directs uniquement à hauteur de 5,00 $ US. Vous ne pouvez prétendre à aucune indemnisation pour les autres
dommages, y compris les dommages spéciaux, indirects ou accessoires et pertes de bénéfices.
Cette limitation concerne:
• tout ce qui est relié au contenu sous licence, aux services ou au contenu (y compris le code)
figurant sur des sites Internet tiers ou dans des programmes tiers ; et
• les réclamations au titre de violation de contrat ou de garantie, ou au titre de responsabilité
stricte, de négligence ou d’une autre faute dans la limite autorisée par la loi en vigueur.
Elle s’applique également, même si Microsoft connaissait ou devrait connaître l’éventualité d’un tel dommage. Si
votre pays n’autorise pas l’exclusion ou la limitation de responsabilité pour les dommages indirects, accessoires
ou de quelque nature que ce soit, il se peut que la limitation ou l’exclusion ci-dessus ne s’appliquera pas à votre
égard.

EFFET JURIDIQUE. Le présent contrat décrit certains droits juridiques. Vous pourriez avoir d’autres droits
prévus par les lois de votre pays. Le présent contrat ne modifie pas les droits que vous confèrent les lois de votre
pays si celles-ci ne le permettent pas.

Revised July 2013



Acknowledgements
Microsoft Learning would like to acknowledge and thank the following for their contribution towards
developing this title. Their efforts at various stages in the development have ensured that you have a good
classroom experience.

Marcin Policht – Subject Matter Expert/Content Developer


Marcin Policht obtained his Master of Computer Science degree 18 years ago. He has worked in the IT
field since then, focusing primarily on directory services, virtualization, system management, and database
management. Marcin authored the first book dedicated to Windows Management Instrumentation and
co-wrote several others on topics ranging from core operating system features to high-availability
solutions. His articles have been published on ServerWatch.com and DatabaseJournal.com. Marcin has
been a Microsoft MVP for the last seven years.

Contents
Module 1: Introduction to Microsoft Azure
Module Overview 1-1
Lesson 1: Cloud technology overview 1-2

Lesson 2: Overview of Azure 1-6

Lesson 3: Managing Azure with the Azure portals 1-22


Lesson 4: Managing Azure with PowerShell 1-25

Lesson 5: Managing Azure with Azure CLI 1-31

Lesson 6: Overview of Azure deployment models 1-35


Lab: Managing Microsoft Azure 1-49

Module Review and Takeaways 1-50

Module 2: Implementing and managing Azure networking


Module Overview 2-1

Lesson 1: Overview of Azure networking 2-2

Lesson 2: Implementing and managing virtual networks 2-24


Lab A: Using a deployment template and Azure PowerShell to implement
Azure virtual networks 2-31

Lesson 3: Configuring an Azure virtual network 2-32

Lesson 4: Configuring virtual network connectivity 2-42

Lab B: Configuring VNet Peering 2-61

Module Review and Takeaways 2-62

Module 3: Implementing Microsoft Azure Virtual Machines and


virtual machine scale sets
Module Overview 3-1
Lesson 1: Overview of Virtual Machines and virtual machine scale sets 3-2

Lesson 2: Planning deployment of Virtual Machines


and virtual machine scale sets 3-5

Lesson 3: Deploying Virtual Machine and virtual machine scale sets 3-19

Lab: Deploying Virtual Machines 3-39

Module Review and Takeaways 3-40


MCT USE ONLY. STUDENT USE PROHIBITED
xiv Implementing Microsoft Azure Infrastructure Solutions

Module 4: Managing Azure VMs
Module Overview 4-1
Lesson 1: Configuring Azure VMs 4-2
Lesson 2: Managing disks of Azure VMs 4-10
Lesson 3: Managing and monitoring Azure VMs 4-17
Lab: Managing Azure VMs 4-28
Module Review and Takeaways 4-29

Module 5: Implementing Azure App Service
Module Overview 5-1
Lesson 1: Introduction to App Service 5-2
Lesson 2: Planning app deployment in App Service 5-12
Lesson 3: Implementing and maintaining web apps 5-17
Lesson 4: Configuring web apps 5-25
Lesson 5: Monitoring web apps and WebJobs 5-33
Lesson 6: Implementing Traffic Manager 5-38
Lab: Implementing web apps 5-43
Module Review and Takeaways 5-45

Module 6: Planning and implementing Azure Storage
Module Overview 6-1
Lesson 1: Planning storage 6-2
Lesson 2: Implementing and managing Azure Storage 6-13
Lesson 3: Exploring Azure hybrid storage solutions 6-27
Lesson 4: Implementing Azure CDNs 6-33
Lab: Planning and implementing Azure Storage 6-39
Module Review and Takeaways 6-41

Module 7: Implementing containers in Azure
Module Overview 7-1
Lesson 1: Implementing Windows and Linux containers in Azure 7-2
Lab A: Implementing containers on Azure VMs 7-14
Lesson 2: Implementing Azure Container Service 7-16
Lab B: Implementing Azure Container Service (AKS) 7-32
Module Review and Takeaways 7-33



Module 8: Planning and implementing backup and disaster recovery
Module Overview 8-1
Lesson 1: Planning for and implementing Azure Backup 8-3
Lesson 2: Overview of Azure Site Recovery 8-11
Lesson 3: Planning for Site Recovery 8-20
Lesson 4: Implementing Site Recovery with Azure as the disaster recovery site 8-29
Lab: Implementing Azure Backup and Azure Site Recovery 8-37
Module Review and Takeaways 8-38

Module 9: Implementing Azure Active Directory
Module Overview 9-1
Lesson 1: Creating and managing Azure AD tenants 9-2
Lesson 2: Configuring application access with Azure AD 9-16
Lesson 3: Overview of Azure AD Premium 9-24
Lab: Implementing Azure AD 9-31
Module Review and Takeaways 9-33

Module 10: Managing Active Directory infrastructure in hybrid and cloud only scenarios
Module Overview 10-1
Lesson 1: Designing and implementing an Active Directory environment by using Azure IaaS 10-2
Lesson 2: Implementing directory synchronization between AD DS and Azure AD 10-8
Lesson 3: Implementing single sign-on in federated scenarios 10-28
Lab: Implementing and managing Azure AD synchronization 10-37
Module Review and Takeaways 10-38

Module 11: Using Microsoft Azure-based management, monitoring, and automation
Module Overview 11-1
Lesson 1: Using Azure-based monitoring and management solutions 11-2
Lesson 2: Implementing Automation 11-17
Lesson 3: Implementing Automation runbooks 11-22
Lesson 4: Implementing Automation-based management 11-29
Lab: Implementing Automation 11-33
Module Review and Takeaways 11-34



About This Course


This section provides a brief description of the course, audience, suggested prerequisites, and course
objectives.

Course Description
This course teaches information technology (IT) professionals how to provision and manage services in
Microsoft Azure. Students will learn how to implement infrastructure components such as virtual
networks, virtual machines (VMs), web and mobile apps, and storage in Azure. Students also will learn how
to plan for and manage Azure Active Directory (Azure AD) and configure Azure AD integration with the
on-premises Active Directory domains.

Audience
This course is intended for IT professionals who are familiar with managing on-premises IT deployments
that include Active Directory Domain Services (AD DS), virtualization technologies, and applications.
Students typically work for organizations that are planning to locate some or all their infrastructure
services on Azure. This course also is intended for IT professionals who want to take the Microsoft
Certification Exam 70-533: Implementing Microsoft Azure Infrastructure Solutions.

Student Prerequisites
This course requires that you can meet the following prerequisites:

• Completed the Microsoft Certified Systems Administrator (MCSA) certification in Windows Server
2012 or Windows Server 2016.
• Understanding of on-premises virtualization technologies, including: VMs, virtual networking, and
virtual hard disks.
• Understanding of network configuration, including: TCP/IP, Domain Name System (DNS), virtual
private networks (VPNs), firewalls, and encryption technologies.

• Understanding of websites, including: how to create, configure, monitor, and deploy a website on
Internet Information Services (IIS).

• Understanding of Active Directory concepts, including: domains, forests, domain controllers,


replication, Kerberos protocol, and Lightweight Directory Access Protocol (LDAP).

• Understanding of resilience and disaster recovery, including backup and restore operations.

Course Objectives
After completing this course, students will be able to:
• Describe Azure architecture components, including infrastructure, tools, and portals.

• Implement and manage virtual networking within Azure and configure cross-premises connectivity.

• Plan and create Azure VMs and virtual machine scale sets.
• Configure, manage, and monitor Azure VMs to optimize availability and reliability.

• Implement Azure App Service.

• Plan and implement Azure storage.

• Implement container-based workloads in Azure.

• Plan and implement Azure Backup and disaster recovery.



• Implement Azure AD.

• Manage an Active Directory infrastructure in a hybrid or cloud only environment.

• Manage, monitor, and automate operations in Azure.

Course Outline
The course outline is as follows:

Module 1. Introduction to Microsoft Azure

This module introduces cloud solutions in general and then focuses on the services that Azure offers. The
module goes on to describe the portals that you can use to manage Azure subscriptions and services
before introducing the Azure PowerShell modules and Azure Command Line Interface (CLI) as scripting
technologies for managing Azure. Finally, the module provides explanations and guidance for the use of
the classic and Azure Resource Manager deployment models.

Module 2. Implementing and managing Azure networking

This module explains how to plan virtual networks in Azure and implement and manage virtual networks.
It also explains how to configure cross-premises connectivity and connectivity between virtual networks in
Azure. Additionally, it explains how to configure an Azure virtual network and provides an overview of
Azure classic networking.
Module 3. Implementing Microsoft Azure Virtual Machines and virtual machine scale sets

This module introduces the fundamentals of Azure VMs and virtual machine scale sets and discusses the
different ways in which you can deploy and manage them.
Module 4. Managing Azure VMs

This module explains how to configure and manage Azure VMs, including configuring virtual machine
disks and monitoring Azure VMs.
Module 5. Implementing Azure App Service

This module explains the different types of apps that you can create by using the Azure App Service, and
how you can select an App Service plan and deployment method for apps in Azure. It also explains how to
use Microsoft Visual Studio, File Transfer Protocol (FTP) clients, Azure PowerShell, and Azure CLI to
deploy Azure web and mobile apps. Additionally, the module explains how to configure web apps and
use the Azure WebJobs feature to run custom tasks. It also explains how to monitor the performance of
web apps. Lastly, this module explains how to use Azure Traffic Manager to distribute requests between
two or more app services.

Module 6. Planning and implementing Azure storage


This module explains how to plan and implement storage services. It explains how to choose appropriate
Azure Storage options to address business needs and how to implement and manage Azure Storage. It
also explains how to improve web-application performance by implementing Azure Content Delivery
Networks (CDNs).

Module 7. Implementing containers in Azure

This module explains how to implement containers in Azure. It starts by introducing the concept of
containers and presents different options for implementing containers on Windows and Linux Azure VMs.
Next, it explains container orchestration in the context of Azure Container Service (ACS) and describes
how to use ACS to deploy Docker Swarm, Kubernetes, and DC/OS clusters.

Module 8. Planning and implementing backup and disaster recovery

This module describes the different types of scenarios that Azure Backup and Azure Site Recovery
support. This includes the process of configuring backup in on-premises and cloud environments and
planning Azure Site Recovery deployments.

Module 9. Implementing Azure Active Directory

This module explains how to implement Azure AD. It explains how to create and manage Azure AD
tenants. It also explains how to configure single sign-on (SSO) for cloud applications and resources and
implement Azure Role-Based Access Control (RBAC) for cloud resources. Lastly, this module explains the
functionality of Azure AD Premium, and how to implement Azure Multi-Factor Authentication.

Module 10. Managing Active Directory infrastructure in hybrid and cloud only scenarios

This module explains how to manage Active Directory in a hybrid environment. It explains how to extend
an on-premises Active Directory domain to Azure infrastructure as a service (IaaS) environments and
synchronize user, group, and computer accounts between on-premises AD DS and Azure AD. This module
also explains how to set up SSO by using federation and pass-through authentication between on-
premises Active Directory and Azure AD.

Module 11. Using Azure-based management, monitoring, and automation


This module explains how to implement Azure-based management and automation. It explains how to
implement monitoring solutions and Azure Automation. This module also describes how to create
different types of Azure Automation runbooks and implement Azure Automation-based management by
using runbooks.

Exam/Course Mapping
This course, 20533E: Implementing Microsoft Azure Infrastructure Solutions, has a direct mapping of its
content to the objective domain for the Microsoft Exam 70-533: Implementing Microsoft Azure
Infrastructure Solutions. The following table is a study aid that will assist you in preparation for taking
Exam 70-533 by showing you how the exam objectives and the course content fit together. The course is
not designed exclusively to support the exam but also provides broader knowledge and skills to allow a
real-world implementation of the technology and will utilize the unique experience and skills of your
qualified Microsoft Certified Trainer.

Note: The exam objectives are available online at: http://www.microsoft.com/learning
/en-us/exam-70-533.aspx, under “Skills Measured.”

Taking this course does not guarantee that you will automatically pass any certification exam. In addition
to attending this course, you also should have the following:

• Real-world, hands-on experience administering a Windows Server 2012 infrastructure

• Additional study outside of the content in this handbook

Additional study and preparation resources, such as practice tests, may also be available for you to
prepare for this exam. Details of these additional resources are available at
http://www.microsoft.com/learning/en-us/exam-70-533.aspx, under “Preparation options.”

You also should check out the Microsoft Virtual Academy, http://www.microsoftvirtualAcademy.com, to
view additional study resources and online courses that can assist you with exam preparation and
career development.

To ensure you are sufficiently prepared before taking the certification exam, you should familiarize
yourself with the audience profile and exam prerequisites. The complete audience profile for this exam is
available at http://www.microsoft.com/learning/en-us/course.aspx?ID=20533E, under “Overview,
Audience Profile.”

The following materials are included with your kit:

• Course Handbook is a succinct classroom learning guide that provides the critical technical
information in a crisp, tightly focused format, which is essential for an effective in-class learning
experience.

You may be accessing either a printed course handbook or digital courseware material via the Skillpipe
reader by Arvato. Your Microsoft Certified Trainer will provide specific details, but both printed and digital
versions contain the following:

• Lessons guide you through the learning objectives and provide the key points that are critical to the
success of the in-class learning experience.

• Labs provide a real-world, hands-on platform for you to apply the knowledge and skills learned in
the module.

• Module Reviews and Takeaways sections provide on-the-job reference material to boost
knowledge and skills retention.
• Lab Answer Keys provide step-by-step lab solution guidance.

Additional Reading: Course Companion Content on the https://aka.ms/Companion-MOC website.
This is searchable, easy-to-browse digital content with integrated premium online resources that
supplement the Course Handbook.

• Modules. Modules include companion content, such as questions and answers, detailed
demonstration steps, and additional reading links for each lesson. Additionally, modules include Lab
Review questions and answers and Module Reviews and Takeaways sections, which contain the review
questions and answers, best practices, common issues and troubleshooting tips with answers, and
real-world issues and scenarios with answers.

• Resources. Resources include well-categorized additional resources that give you immediate access
to the current premium content on TechNet, MSDN, and Microsoft Press.
• Course Evaluation. At the end of the course, you will have the opportunity to complete an online
evaluation to provide feedback on the course, training facility, and instructor.

o To provide additional comments or feedback, or to report a problem with course resources, visit
the Training Support site at https://trainingsupport.microsoft.com/en-us. To inquire about the
Microsoft Certification Program, send an e-mail to certify@microsoft.com.

Virtual Machine Environment


This section provides the information for setting up the classroom environment to support the course’s
business scenario.

Virtual Machine Configuration


In this course, you will perform the labs using virtual machines built in Microsoft Hyper-V.

Important: Pay close attention to the steps at the end of each lab that explain what you
need to do with the virtual machines. In most labs, you will revert the virtual machine to the
checkpoint that you create during classroom setup. In some labs, you will not revert the virtual
machines, but will keep them running for the next lab.

The following table shows the role of each virtual machine that you will use in this course.

Virtual machine       Role

20533E-MIA-CL1        Windows 10 standalone client with the Microsoft Azure management tools installed

MT17B-WS2016-NAT      Internet gateway

Software Configuration
The following software is installed on the virtual machines:
• Microsoft SQL Server 2016 SP1 Express

• SQL Server Management Studio

• Microsoft Visual Studio Community 2015


• Azure Cross Platform Command Line Tools

Classroom Setup
Each classroom computer will have the same virtual machine environment.

You may be accessing the lab virtual machines either in a hosted online environment with a web browser,
or by using Hyper-V on a local machine. The labs and virtual machines are the same in both scenarios;
however, there may be some slight variations because of hosting requirements. Any discrepancies will be
pointed out in the Lab Notes on the hosted lab platform.

Your Microsoft Certified Trainer will provide details about your specific lab environment.

Microsoft Azure
This course contains labs that require access to Microsoft Azure. You will be provided with a Microsoft
Learning Azure Pass to facilitate access to Microsoft Azure. Your Microsoft Certified Trainer will provide
details of how to acquire, set up, and configure your Microsoft Azure access.

You should be aware of some general best practices when using your Microsoft Learning Azure Pass:
• Once you have set up your Microsoft Learning Azure Pass subscription, check the dollar balance of
your Azure Pass within Microsoft Azure and be aware of how much you are consuming as you
proceed through the labs.

• Do not allow Microsoft Azure components to run overnight or for extended periods unless you need
to, as this will use up the pass dollar amount unnecessarily.

• After you finish your lab, remove any Microsoft Azure–created components or services such as
storage, virtual machines, or cloud services, to help minimize cost usage and extend the life of your
Microsoft Learning Azure Pass.

Important: You may use your own full or trial Microsoft Azure subscription if you wish but
note that the labs have not been tested with all subscription types. Therefore, while unlikely, it is
possible some variations could exist due to some subscription limitations. In addition, be aware
that the scripts used in the labs will delete any existing services or components present in
Microsoft Azure under the subscription that you use.

Course Hardware Level


To ensure a satisfactory student experience, Microsoft Learning requires a minimum equipment
configuration for trainer and student computers in all Microsoft Learning Partner classrooms in which
Official Microsoft Learning Product courseware is taught.

The instructor and student computers must meet the following hardware requirements:
• Intel Virtualization Technology (Intel VT) or AMD Virtualization (AMD-V) processor

• Dual 120-gigabyte (GB) hard disks, 7200 RPM Serial ATA (SATA) or better*

• 16 GB of random access memory (RAM)


• DVD drive

• Network adapter

• Super VGA (SVGA) 17-inch monitor

• Microsoft mouse or compatible pointing device

• Sound card with amplified speakers

*Striped
In addition, the instructor computer must be connected to a projection display device that supports
SVGA 1024 x 768 pixels, 16-bit colors.

Module 1
Introduction to Microsoft Azure
Contents:
Module Overview 1-1
Lesson 1: Cloud technology overview 1-2
Lesson 2: Overview of Azure 1-6
Lesson 3: Managing Azure with the Azure portals 1-22
Lesson 4: Managing Azure with PowerShell 1-25
Lesson 5: Managing Azure with Azure CLI 1-31
Lesson 6: Overview of Azure deployment models 1-35
Lab: Managing Microsoft Azure 1-49
Module Review and Takeaways 1-50

Module Overview
Organizations are increasingly moving IT workloads to the cloud, so IT professionals must understand the
principles of cloud solutions. They must also learn how to deploy and manage cloud apps, services, and
infrastructure. IT professionals who are planning to use Microsoft Azure must learn about the services that
Azure provides and how to manage them.
This module introduces cloud solutions in general and then focuses on the services that Azure offers. The
module goes on to describe the portals that you can use to manage Azure subscriptions and services,
before introducing the Azure PowerShell modules and Azure CLI as scripting technologies for managing
Azure. Finally, the module explains the use of Azure Resource Manager and presents an overview of Azure
management services.

Objectives
After completing this module, you will be able to:
• Identify suitable apps for the cloud.

• Identify the services and capabilities that Azure provides.

• Use Azure portals to manage Azure services and subscriptions.


• Use Azure PowerShell to manage Azure services and subscriptions.

• Use Azure CLI to manage Azure services and subscriptions.

• Use Azure Resource Manager to manage Azure resources.



Lesson 1
Cloud technology overview
Cloud computing plays an increasingly important role in IT infrastructure, and IT professionals need to be
aware of fundamental cloud principles and techniques. This lesson introduces the cloud and describes
considerations for implementing cloud-based infrastructure services.

Lesson Objectives
After completing this lesson, you will be able to:

• Prepare the lab environment.

• Describe the key principles of cloud computing.


• Identify common types of cloud services.

Demonstration: Preparing the lab environment


Perform the tasks in this demonstration to prepare the lab environment. The environment will be
configured while you progress through this module.

Important: The scripts used in this course might delete objects that you have in your
subscription. Therefore, you should complete this course by using a new Azure subscription. You
should also use a new Microsoft account that is not associated with any other Azure subscription.
This will eliminate the possibility of any potential confusion when running setup scripts.

This course relies on custom Azure PowerShell modules, including Add-20533EEnvironment to prepare
the lab environment, and Remove-20533EEnvironment to perform clean-up tasks at the end of the
module.
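As a sketch of that workflow, the setup cmdlet might be loaded and run as follows. The cmdlet names come from the course files, but the module file name, its location, and the sign-in step are assumptions that may differ in your classroom setup; your Microsoft Certified Trainer will provide the exact steps.

```powershell
# Hypothetical invocation of the course setup module; the actual file
# name and location depend on your lab files.
Import-Module .\Add-20533EEnvironment.psm1

# Sign in to the Azure subscription dedicated to this course before
# running the setup cmdlet.
Login-AzureRmAccount

# Provision the lab environment. At the end of the module, run
# Remove-20533EEnvironment to clean up the resources it created.
Add-20533EEnvironment
```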

Introduction to cloud computing


Cloud computing, or the cloud, has become a leading trend in IT. However, its definition is
ambiguous, and some of the terminology related to it is confusing. Trying to define the cloud in
purely technological terms is difficult—it is best to think of it as an abstract concept that encapsulates
techniques used to provide computing services from a pool of shared resources.

Most cloud solutions use virtualization technology, which abstracts physical hardware as a layer of
virtualized resources for processing, memory, storage, and networking. Many cloud solutions
add further layers of abstraction to define specific services that you can provision and use.

As the National Institute of Standards and Technology has identified, cloud computing solutions exhibit
the following five characteristics, regardless of the specific technologies that organizations use to
implement them:

• On-demand self-service. Cloud services are generally provisioned as they are required, and they need
minimal infrastructure configuration by the consumer. Therefore, cloud service users can quickly set
up the resources they want, typically without having to involve IT specialists.

• Broad network access. Consumers access cloud services over a network connection from different
locations, usually either a corporate network or the internet.

• Resource pooling. Cloud services use a pool of hardware resources that consumers share. A hardware
pool consists of hardware from multiple servers that are arranged as a single logical entity.

• Rapid elasticity. Cloud services scale dynamically to obtain additional resources from the pool as
workloads intensify, and they release resources automatically when no need for them exists.

• Measured service. Cloud services include metering capabilities, which allow you to track resource
usage by consumers. This facilitates the usage-based billing model, where service cost reflects
utilization levels.

Advantages of cloud computing


Cloud computing has several advantages over traditional, datacenter-based computing, including the
following:

• A managed datacenter. With cloud computing, your service provider can manage your datacenter.
This obviates the need for you to manage your own IT infrastructure. With cloud computing, you can
also access computing services irrespective of your location and the hardware that you use to access
those services. Although the datacenter remains a key element in cloud computing, the emphasis is
on virtualization technologies that focus on delivering apps rather than on infrastructure.
• Reduced or even eliminated capital expenditure. With cloud providers owning and managing
datacenters, organizations no longer require their own infrastructure for deploying and managing
virtualized workloads.
• Lower operational costs. Cloud computing provides pooled resources, elasticity, and virtualization
technology. These factors help you to alleviate issues such as low system use, inconsistent availability,
and high operational costs. It is important to remember that with cloud computing, you pay for only
the services that you use; this can mean substantial savings on operational costs for most
organizations.
• Server consolidation. You can consolidate servers across the datacenter by using the cloud computing
model, because it can host multiple virtual machines on a virtualization host.

• Better flexibility and speed. You can address changing business needs efficiently by rapidly scaling
your workloads, both horizontally and vertically, and deploying new solutions without infrastructure
constraints.

Public, private, and hybrid clouds


Cloud computing uses three main deployment models:
• Public cloud. Public clouds are infrastructure, platform, or application services that a cloud service
provider delivers for access and consumption by multiple organizations. With public cloud services,
the organization that signs up for the service does not have the management overhead that the
private cloud model requires. However, this also means that the organization has less control of the
infrastructure and services, because the service provider manages them for the organization. In
addition, the public cloud hosts the infrastructure and services for multiple organizations
(multitenant), so you should consider the potential data sovereignty implications of this model.

• Private cloud. Individual organizations privately own and manage private clouds. Private clouds offer
benefits similar to those of public clouds, but are designed and security-enhanced for a single
organization’s use. The organization manages and maintains the infrastructure for the private cloud in
its datacenter. One of the key benefits of this approach is that the organization has complete control
over the cloud infrastructure and services that it provides. However, this model requires additional
management and increases costs for the organization.

• Hybrid cloud. In a hybrid cloud, a technology binds two separate clouds (public and private) together
for the specific purpose of obtaining resources from both. You decide which elements of your services
and infrastructure to host privately and which to host in the public cloud.

Many organizations use a hybrid model when extending to the cloud; that is, when they begin to shift
some elements of their apps and infrastructure to the cloud. Sometimes, an organization shifts an app
and its supporting infrastructure to the cloud while maintaining the underlying database within its
own infrastructure. This approach might help keep that database more secure.

Types of cloud services


Cloud services generally fall into one of the
following three categories:

• Software as a service (SaaS)

• Platform as a service (PaaS)

• Infrastructure as a service (IaaS)

SaaS
SaaS offerings consist of fully formed software
apps that are delivered as cloud-based services.
Users can subscribe to the service and use the app,
normally through a web browser or by installing a
client-side app. Examples of Microsoft SaaS services include Microsoft Office 365, Skype for Business, and
Microsoft Dynamics 365. The primary advantage of SaaS services is that they enable users to access apps
without having to install and maintain them. Typically, users do not have to worry about updating apps
and maintaining compliance, because the service provider handles tasks such as these.

PaaS
PaaS offerings consist of cloud-based services that provide resources on which developers can build their
own solutions. Typically, PaaS encapsulates fundamental operating system capabilities, including storage
and compute, in addition to functional services for custom apps. Usually, PaaS offerings provide
application programming interfaces (APIs), in addition to configuration and management user interfaces.
With PaaS, developers and organizations can create highly scalable custom apps without having to
provision and maintain hardware and operating system resources. Examples of PaaS services include
Azure App Service, which provides a runtime environment for a web app or mobile app that your
development team creates.

IaaS
IaaS offerings provide virtualized server and network infrastructure components that can be easily
provisioned and decommissioned as required. Typically, you manage IaaS facilities as you would manage
on-premises infrastructures. IaaS facilities provide an easy migration path for moving existing apps to the
cloud.

Note that an infrastructure service might be a single IT resource—such as a virtual server with a default
installation of Windows Server 2016 and Microsoft SQL Server 2016, or a Linux server with MySQL Server
installed to provide database services—or it might be a complete infrastructure environment for a specific
app or business process. For example, a retail organization might empower departments to provision their
own database servers to use as data stores for custom apps. Alternatively, the organization might define a
set of virtual machine and network templates that can be provisioned as a single unit. These templates
would implement a complete, preconfigured infrastructure solution for a branch or store, including all the
required apps and settings.
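In Azure, a set of resources that can be provisioned as a single unit is typically described in an Azure Resource Manager template. The skeleton below is an illustrative sketch, not part of the course files; the resource names, address space, and API versions are assumptions, and the VM definition is deliberately incomplete:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "adminUsername": { "type": "string" }
  },
  "resources": [
    {
      "type": "Microsoft.Network/virtualNetworks",
      "name": "branchVNet",
      "apiVersion": "2017-10-01",
      "location": "[resourceGroup().location]",
      "properties": {
        "addressSpace": { "addressPrefixes": [ "10.0.0.0/16" ] }
      }
    },
    {
      "type": "Microsoft.Compute/virtualMachines",
      "name": "branchVM",
      "apiVersion": "2017-12-01",
      "location": "[resourceGroup().location]",
      "comments": "Hypothetical VM resource; hardware profile, OS profile, and network interface reference omitted for brevity"
    }
  ]
}
```

Deploying one template of this kind creates all the listed resources together, which is how a branch or store environment could be stamped out repeatedly from the same definition.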

Other “as a service” offerings


As cloud services continue to evolve, other IT functions are being presented as packaged cloud services.
Some examples of these include:

• Identity as a service (IDaaS). IDaaS provides identity management services in a packaged product,
usually for resale to customers. For example, in Azure, Azure Active Directory (Azure AD) provides
identity and access management that integrates with Azure services and apps, whereas Azure AD
Business-to-Consumer (B2C) provides consumer identity management.

• Disaster recovery as a service (DRaaS). DRaaS provides cloud-based backup and recovery services that
are consumable on a pay-per-use model, highly available, and scalable to meet demand. The most
prominent example of this type of service in Azure is Azure Site Recovery.

Question: What advantages does a hybrid cloud model present to an organization that is
new to Azure?

Lesson 2
Overview of Azure
Azure is a cloud offering from Microsoft that individuals and organizations can use to create, deploy, and
operate cloud-based apps and services. This lesson provides an overview of Azure, explains the datacenter
infrastructure that supports it, and describes the services, resources, and tools that are available in Azure.

Lesson Objectives
After completing this lesson, you will be able to:

• Describe the key characteristics of Azure datacenters.

• Explain the Azure service model.


• Locate Azure-related information.

• Provide an overview of Azure services.

• Identify Azure compute hosting options.

• Describe the Azure deployment models.

• Identify Azure management tools.

Understanding Azure datacenters


Datacenters managed by Microsoft host Azure
services throughout the world. Whenever you
create a new Azure service, you must select an
Azure region to determine the datacenter where
the service will run. When you select an Azure
region, you should consider the location of the
service’s users and place the service as close to
them as possible. Some services enable you to serve content from more than one Azure region. In this
way, you can serve a global audience while helping to ensure that users receive responses from a nearby
location, giving them the highest possible performance. At the time of authoring this course, Azure
regions, including newly announced ones, are in the following geographic areas:
• Americas

o East US

o East US 2

o Central US

o North Central US

o South Central US

o West Central US

o West US

o West US 2

o US Gov Virginia

o US Gov Iowa

o US Gov Arizona

o US Gov Texas

o US DoD East

o US DoD Central

o US Sec East

o US Sec West

o Canada East

o Canada Central

o Brazil South

• Europe

o North Europe

o West Europe
o Germany West Central

o Germany North

o Germany Central
o Germany Northeast

o UK South

o UK West
o France Central

o France South

o Switzerland North
o Switzerland West

• Asia Pacific

o Southeast Asia
o East Asia

o Australia East

o Australia Southeast

o Australia Central 1

o Australia Central 2

o China East
o China North

o Central India

o South India

o West India

o Japan East

o Japan West

o Korea Central

o Korea South

• Africa and Middle East

o South Africa West

o South Africa North

o United Arab Emirates (UAE) Central

o UAE North
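As a concrete illustration of choosing the region closest to your users, the following Python sketch picks the region with the lowest measured latency. The region names are real, but the latency figures are invented for the example; in practice, you would measure latencies from your actual user locations.

```python
# Pick the Azure region closest to your users, given measured
# round-trip latencies in milliseconds. The latency values below
# are invented for illustration only.
SAMPLE_LATENCIES_MS = {
    "West Europe": 18,
    "North Europe": 25,
    "East US": 95,
    "Southeast Asia": 180,
}

def closest_region(latencies_ms):
    """Return the region with the lowest measured latency."""
    return min(latencies_ms, key=latencies_ms.get)
```

With the sample figures above, `closest_region(SAMPLE_LATENCIES_MS)` returns "West Europe".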

Datacenter placement follows the principle of pairing, by which each datacenter has its counterpart in the
same geographical area. The exception is the Brazil South region, which pairs with the South Central US
region. This pairing arrangement facilitates designing and implementing cloud-based disaster-recovery
solutions while keeping all services in the same geographical location, a requirement that governments
and regional organizations often must satisfy due to regulatory, compliance, and data-sovereignty rules.
Additionally, Azure datacenter disaster-recovery and maintenance procedures utilize this pairing to
minimize the potential impact of an incident that affects multiple regions. When deciding where to
deploy your Azure services, you should consider datacenter pairing.
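The pairing relationships just described can be modeled with a small Python lookup. Only an illustrative subset of pairs is shown here; note the Brazil South exception, which pairs with a region in a different geography.

```python
# Illustrative subset of Azure region pairs. Pairings are symmetric
# within a geography, except Brazil South, which pairs with
# South Central US. South Central US's own pair is not modeled here.
REGION_PAIRS = {
    "North Europe": "West Europe",
    "East Asia": "Southeast Asia",
    "Japan East": "Japan West",
    "Brazil South": "South Central US",
}

def paired_region(region):
    """Return the disaster-recovery pair of a region, if known."""
    if region in REGION_PAIRS:
        return REGION_PAIRS[region]
    # Reverse lookup covers the symmetric pairs.
    for primary, secondary in REGION_PAIRS.items():
        if secondary == region:
            return primary
    return None
```

For example, `paired_region("Brazil South")` returns "South Central US", the exception called out above.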

Some of the Azure regions offer an extra level of high availability by implementing Availability Zones.
Zones represent multiple, separate physical locations within the same region, each with its own
independent infrastructure, including power, cooling, and networking. Several Azure services can take
advantage of Availability Zones depending on the zone-integration capabilities:

• Zonal services, such as Azure virtual machines, virtual machine scale sets, managed disks, or public IP
addresses, support deployment to a specific zone.
• Zone-redundant services, such as Azure Storage or Azure SQL Database, support automatic
replication across zones.

To implement resilient workloads in Azure, you should consider combining the benefits of Azure region
pairing and Availability Zones.

Additional Reading: For more information regarding Availability Zones, refer to:
“Overview of Availability Zones” at: https://aka.ms/Hru1gi

The architectural design of Azure datacenters evolved through several generations. The latest generation
features a fully modular design that adheres to the following principles:

• Microsoft packages clusters of servers into preassembled units enclosed in shipping containers,
enabling clusters that contain thousands of servers to be rapidly provisioned and swapped out.

• The datacenters include uninterruptable power supplies and alternate power supplies for all
components, in addition to backup power that can keep the datacenter running in the event of a
localized disaster.

• Redundant high-speed networks connect the clusters within datacenters.

• High-speed optical networks connect the datacenters to each other and to the internet.

• The data within a single datacenter can be replicated to three redundant storage clusters for high
availability and between pairs of datacenters in the same geopolitical area for disaster recovery.

• The physical and network security of Azure datacenters meets a wide range of industry and
government standards.

The datacenters minimize power and water usage for maximum efficiency. These reductions apply to
servers and networking hardware, cooling equipment, and other infrastructure facilities.

The servers in each datacenter are provisioned in clusters, and each cluster includes multiple racks of
servers. A distributed management service built into the platform handles provisioning, dynamic scaling,
and hardware fault management for the virtual servers that host cloud services on the physical servers in
the clusters.

Additional Reading: For more information, including an up-to-date listing of Azure
regions, refer to: “Azure Regions” at: http://aka.ms/Ym4ryz

Understanding the Azure service model


Multitenancy within a scalable and highly available
cloud-based infrastructure forms the basis of the
Azure service model. Two factors define a
subscriber’s usage of Azure services: the
subscription model that determines the scope of
available services and the billing model that
determines the cost of these services. Azure
services are primarily pay-per-use, with charges
reflecting the extent to which these services
consume cloud resources.

Accounts and subscriptions


An Azure account represents a collection of one or
more subscriptions. An Azure account determines how and to whom Azure reports subscription usage. A
subscription constitutes the administrative and billing boundary within an account, which means that:

• From the management standpoint, you can delegate privileges up to the subscription level.

• From the billing standpoint, the cost of individual Azure services rolls up to the subscription level.

Each subscription also is subject to quotas, which determine the maximum quantity of services and
resources that can reside in the same subscription. These limits typically apply on per-subscription and
per-region levels.
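To illustrate how quotas apply at per-subscription and per-region levels, here is a minimal Python sketch. The limit values are hypothetical; the real, service-specific numbers are in the documentation referenced in the Additional Reading note.

```python
# Hypothetical quota check. The limit values below are invented
# for illustration and are not actual Azure quotas.
QUOTAS = {
    ("virtual_machines", "per_region"): 25000,
    ("storage_accounts", "per_region"): 250,
    ("resource_groups", "per_subscription"): 980,
}

def within_quota(resource, scope, current_count, requested):
    """Return True if adding `requested` resources stays within quota."""
    limit = QUOTAS[(resource, scope)]
    return current_count + requested <= limit
```

For example, a request that would push a region past its (hypothetical) storage-account limit fails the check, while one that stays at or under the limit passes.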

Additional Reading: For a comprehensive and up-to-date listing of Azure subscription
limits and quotas, refer to: “Azure subscription and service limits, quotas, and constraints” at:
https://aka.ms/lxo0an

To implement Azure services, you must have a subscription. You can sign up for a subscription as an
individual or as an organization. The sign-up process creates an Azure account, if you do not have one,
and it creates a subscription within that account. If you have an existing account, you can add multiple
subscriptions to it.

Signing in to Azure
To manage Azure resources within a subscription, you first need to authenticate. The most common
authentication methods involve using either of the following types of accounts:

• A Microsoft Account

• A work or school account (formerly referred to as an organizational account)

Work or school accounts differ from Microsoft accounts because they are defined in Azure Active
Directory (Azure AD). Every Azure subscription is associated with an Azure AD tenant that can host these
accounts.

Administrative roles and role-based access control (RBAC)


Azure provides three built-in account and subscription-level administrative roles:

1. Account Administrator. There is one Account Administrator for each Azure account. The Account
Administrator can access the web portal referred to as the Azure Account Center. This enables the
Account Administrator to perform billing and administrative tasks, such as creating subscriptions,
canceling subscriptions, changing the billing method for a subscription, or changing the designated,
subscription-level administrative account known as Service Administrator.

Note: Only the person with the Account Administrator role can access the corresponding
account in the Account Center. However, the Account Administrator does not have access to
resources in any subscriptions in the account.

Additional Reading: The Account Center is accessible from
https://account.windowsazure.com

2. Service Administrator. There is one Service Administrator for each Azure subscription. Initially, the
Service Administrator is the only account that can create and manage resources within the
subscription. By default, if you create a new subscription in a new account by using a Microsoft
account, your account serves as both the Account Administrator and the Service Administrator.

3. Co-Administrator. The Service Administrator can create up to 200 Co-Administrators for each Azure
subscription. Co-Administrators have full permissions to create and manage Azure resources in the
same subscription, but they cannot revoke Service Administrator privileges or grant Co-Administrator
privileges to others. They also cannot change the association of the current subscription to its Azure
AD tenant. Such changes require Service Administrator privileges.

To comply with the principle of least privilege, you should avoid relying on Co-Administrators for
delegation of your subscription management. Instead, you should grant a minimum required set of
permissions by using role-based access control (RBAC).

RBAC allows you to provide granular access to perform specific actions on Azure resources, down to an
individual-resource level. You can specify which actions to perform by using either a predefined or a
custom role. Once you have decided which role to use, you assign it to an Azure AD object representing
the user, group, or application that should be able to carry out the role’s associated actions.
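The RBAC model described above can be illustrated with a minimal Python sketch: a role is a set of allowed actions, and an assignment grants a role to a principal at a scope that covers everything beneath it, down to an individual resource. The role names are real built-in roles, but the action strings and scope paths are simplified stand-ins for Azure's actual operation names and resource IDs.

```python
# Minimal sketch of RBAC semantics. "Reader" and "Contributor" are
# real built-in roles; the actions and scopes are simplified examples.
ROLES = {
    "Reader": {"read"},
    "Contributor": {"read", "write", "delete"},
}

def can_perform(assignments, principal, action, resource_scope):
    """Check whether a principal may perform an action on a resource."""
    for who, role, scope in assignments:
        if who != principal:
            continue
        # An assignment covers its scope and all child scopes.
        if resource_scope.startswith(scope) and action in ROLES[role]:
            return True
    return False

assignments = [
    ("alice", "Contributor", "/subscriptions/sub1/resourceGroups/rg1"),
    ("bob", "Reader", "/subscriptions/sub1"),
]
```

In this sketch, alice can write within rg1 because her Contributor assignment covers that scope, whereas bob, a subscription-level Reader, can read anywhere in the subscription but cannot write.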

Note: You will learn more about RBAC in Module 11, “Implementing Azure-based
management, monitoring, and automation.”

Pricing and billing


There are several basic pricing and billing options for Azure:

• Pay-As-You-Go. Choose this option if you want a flexible pricing plan. You pay only for the services
that you use. You can cancel this subscription at any time. You can make payments by using credit or
debit cards, or via invoicing, if approved.

Additional Reading: For more information, refer to: “Pay-As-You-Go” at:
http://aka.ms/Uis9fx

• Buying from a Microsoft reseller. This option allows you to work with the same resellers from whom
you currently purchase Microsoft software under the Microsoft Open License program. Start by
purchasing Azure in Open credits. You can then use these credits to activate your subscription and
apply them toward any Azure service that is eligible for monetary commitments when purchased
online. Alternatively, you can use this option to purchase an Azure subscription from a Cloud Solution
Provider (CSP). Additionally, you can include any value-added services that are part of the offering
that the CSP delivers.

Additional Reading: For more information, refer to: “Get Started with Azure in Open
Licensing” at: http://aka.ms/Mq0oy5

• Enterprise Agreement. This option is best suited for organizations with at least 250 users and devices.
Enterprise Agreement involves making an upfront commitment to purchase Azure services.
Customers who select this option rely on the Enterprise portal to administer their subscription.
Microsoft bills these customers annually. Customers can adjust the scope of the agreement towards
the end of each billing period. This option makes it easier to accommodate unplanned growth.

Additional Reading: For more information, refer to: “Licensing Azure for the Enterprise” at:
http://aka.ms/Br93cj

• Azure Hybrid Benefit. Customers with Software Assurance qualify for discounts on Azure virtual
machines (VMs) running Windows Server by leveraging their existing on-premises licenses.

Additional Reading: For more information about Microsoft Azure Hybrid Benefit, refer to:
“Azure Hybrid Benefit” at: https://aka.ms/pc0s73

• Azure Reserved Virtual Machine Instances. Customers can benefit from significantly lower pricing of
Azure virtual machines of a particular family and within a specific Azure region by prepaying for their
usage over a one-year or three-year term. Customers can combine Azure Reserved Virtual Machine
Instances with Azure Hybrid Benefit for a savings of up to 82 percent.

Additional Reading: For more information about Microsoft Azure Reserved Virtual
Machine Instances, refer to: “Azure Reserved VM Instances (RIs)” at: https://aka.ms/ef58xp

Microsoft also provides several benefits to members of specific programs, such as Microsoft Developers
Network (MSDN), the Microsoft Partner network, and BizSpark:

• MSDN. Members receive monthly credits toward their Azure subscription for services that they use for
development purposes.

• Partner. Partners receive monthly credits toward their Azure subscription and receive access to
resources to help expand their cloud practice.

• BizSpark. Members receive monthly credits toward their Azure subscription.

Additional Reading: For more information about members’ benefits, refer to: “Member
Offers” at: https://aka.ms/H0y8qt

Support plans
You can also purchase support plans from Microsoft that provide varying levels of support for your Azure
environment. You can choose from the following support plans:

• Developer. The Developer plan is designed for test or nonproduction environments. It includes
technical support for Azure during business hours with an initial response time of less than eight
hours.

• Standard. The Standard plan offers the same features as the Developer plan, and the initial response
time is less than two hours.
• Professional Direct. This plan is designed for organizations that depend on Azure for business-critical
apps or services. It includes the same features as the Standard plan in addition to basic advisory
services, pooled support account management, escalation management, and an initial response time
of less than one hour.

• Premier. This is the highest level of support that includes all Microsoft products, in addition to Azure.
With Premier, you receive customer-specific advisory services, a dedicated support account manager
and a response time of less than 15 minutes, in addition to all the Professional Direct features.

Additional Reading: For more information, refer to: “Azure support plans” at:
http://aka.ms/N613e7

Azure resource pricing


In general, cloud technologies enable you to minimize or even eliminate capital expenditures. They can
also help lower your operational costs. Azure is no exception, and its pricing model reflects this.

Azure compute-related charges are usually calculated, depending on the service type, on a per-second or
per-minute basis, and they reflect actual usage. For example, when you deploy Azure VMs, the
corresponding cost reflects the time during which they are running. These charges accrue whenever a
virtual machine is running, but stop as soon as you stop the virtual machine and the platform deallocates
its resources. Another, smaller part of the virtual-machine cost reflects the usage of Azure Storage for
virtual machine disk files. Charges for storage allocated to the virtual machine disk files apply regardless
of the state of the virtual machine.
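A small Python sketch can make this billing behavior concrete: compute charges accrue only while a VM runs, whereas disk-storage charges accrue regardless of its state. The rates used here are invented for illustration and are not actual Azure prices.

```python
# Sketch of VM billing behavior: compute is billed only for running
# time, storage is billed continuously. Rates are hypothetical.
COMPUTE_RATE_PER_HOUR = 0.10           # invented VM rate, USD
STORAGE_RATE_PER_DISK_GB_MONTH = 0.05  # invented storage rate, USD

def monthly_vm_cost(hours_running, disk_gb):
    """Estimate one month's charges for a VM and its disk."""
    compute = hours_running * COMPUTE_RATE_PER_HOUR
    # Disk storage is billed whether or not the VM is running.
    storage = disk_gb * STORAGE_RATE_PER_DISK_GB_MONTH
    return round(compute + storage, 2)
```

Note that a stopped, deallocated VM still incurs the storage portion: `monthly_vm_cost(0, 128)` is nonzero.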
Microsoft offers most Azure services in several pricing tiers to accommodate different customer needs
and facilitate vertical scaling. By implementing vertical scaling, customers can increase or decrease
processing power and service capacity. They can also implement horizontal scaling to meet fluctuating
demand. In either case, customers can optimize usage charges by adjusting the pricing tier of an existing
service.

Pricing also might vary depending on the region in which your services reside. In addition, for licensed
products, pricing depends on the licensing model that you choose.

Additional Reading: For more information, refer to: “Azure pricing” at:
http://aka.ms/Svvfpj

To estimate the cost of Azure services that you plan to provision, you can use the Azure pricing calculator.
This web-based tool allows you to pick several types of Azure services, and specify settings for each, such
as their projected usage (in hours, weeks, or months), pricing tier, target Azure region, billing and support
options, and licensing program. Then, based on this information, the pricing calculator will provide an
overall cost of your solution.

Additional Reading: The Azure pricing calculator is available at: https://aka.ms/lyvi3b

Azure Cost Management


To optimize the usage of your resources in the most cost-effective manner, you should consider
implementing Azure Cost Management. This service monitors your resources and identifies opportunities
to help you minimize resource-related charges. It also generates alerts regarding any unusual usage
patterns or resource charges exceeding a specified threshold. It provides spending forecasts, helping you
with long-term budget projections.
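The forecasting and alerting behavior described above can be sketched as a simple linear projection in Python. Azure Cost Management uses its own models, so treat this purely as an illustration of the idea.

```python
# Illustrative spending forecast and budget alert: project month-end
# spend linearly from month-to-date usage, and flag any projection
# that exceeds a specified budget threshold.
def forecast_month_end(spend_to_date, day_of_month, days_in_month):
    """Linear projection of month-end spend from usage so far."""
    return spend_to_date / day_of_month * days_in_month

def over_budget(spend_to_date, day_of_month, days_in_month, budget):
    """True when the projected month-end spend exceeds the budget."""
    return forecast_month_end(spend_to_date, day_of_month, days_in_month) > budget
```

For example, 100 USD spent by day 10 of a 30-day month projects to 300 USD at month end, which would trigger an alert against a 250 USD budget.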

Additional Reading: For more information regarding Azure Cost Management, refer to:
“Azure Cost Management Documentation” at: https://aka.ms/E7tvtu

Locating Azure-related information and resources


Microsoft provides resources that facilitate the
implementation and management of your Azure
environment:

• Microsoft Azure at
https://azure.microsoft.com. This website,
owned and managed by Microsoft, hosts the
most comprehensive repository of
information on Azure and Azure-related
topics. The information includes:

o A high-level overview of all Azure products and services.

o Detailed documentation describing all Azure products and services.

o Description of solutions that use Azure services and non-Microsoft applications.

o Details regarding Azure pricing and Azure billing options.

o Azure training resources and description of Azure certifications.

o Azure Marketplace. The Azure Marketplace contains thousands of certified, open-source, and
community-provided resources. You can use it to deploy preconfigured virtual machines,
download developer tools, and provision a wide variety of apps and application programming
interfaces (APIs).

o Azure partner directory.

o Azure support knowledge base.

o Azure-related blogs.

o Azure Trust Center. The Azure Trust Center provides information and guidance around security,
privacy, and compliance in Azure.

• GitHub at https://github.com. GitHub contains APIs, software development kits (SDKs), and open-
source projects. This includes content that Microsoft and the Azure community have created.
Developers can leverage GitHub resources in their projects to save time and effort and upload their
own code for others to reuse.

Demonstration: Locating Azure-related resources


In this demonstration, you will see how to:

• View resources in the Azure Marketplace.

• View Azure-related information on GitHub.


• View information in the Azure Trust Center.

Understanding Azure services


Azure provides a wide range of cloud-based
services that you can use to design and implement
your customized cloud solutions and
infrastructure. Those services include:

• Compute, which provides the following options:

o Virtual Machines. Create Windows and Linux virtual machines from predefined templates or deploy
your own custom server images in the cloud.

o Azure Virtual Machine Scale Sets. Provision highly available and automatically scalable groups of
Windows and Linux virtual machines.

o Azure Functions. Respond to events with serverless code.

o Azure Container Service (AKS). Deploy managed Kubernetes-based clusters of containers.

o Container Instances. Provision containers without having to provision and manage virtual
machines.

o Azure Batch. Run high-volume, large-scale parallel and high-performance computing apps on a
scaled and managed set of virtual machines.

o Azure Service Fabric. Build and manage distributed applications by using small, specialized
software components, known as microservices.

o Azure Cloud Services. Define multitier PaaS cloud services that you can deploy and manage on
Azure.

• Web & Mobile, which provides the following options:

o Azure App Service. Integrate and manage web and mobile app solutions by using:
▪ Web Apps. Deploy Windows-based web apps to the cloud.
▪ Web App for Containers. Deploy Linux container-based web apps to the cloud.
▪ Mobile Apps. Develop and provision highly scalable, globally available mobile apps.
▪ API Apps. Provide building blocks for integrating and building new apps.
o Azure Media Services. Deliver multimedia content, such as video and audio.

o Azure Content Delivery Network. Speed up delivery of web content to users throughout the
world.

o Azure Search. Provide a fully managed search service.

o Azure Notification Hubs. Implement push notifications for apps and services.

• Networking, which provides the following options:


o Azure Virtual Network. Connect and segment the cloud infrastructure components.

o Azure Load Balancer. Implement automatically scalable transport-layer and network-layer load
balancing.
o Azure Application Gateway. Build application-layer load balancing, with support for such features
as Secure Sockets Layer (SSL) offloading, cookie affinity, and URL-based routing.
o Azure VPN Gateway. Create network connections between Azure and on-premises networks over
the internet.

o Azure DNS. Host and manage your DNS domains and records for use with Azure services.

o Azure Traffic Manager. Configure global load balancing based on Domain Name System (DNS).

o Azure ExpressRoute. Extend your on-premises network to Azure and Microsoft cloud services
through a dedicated private connection.
o Azure Distributed Denial of Service (DDoS) Protection. Protect your cloud services from DDoS
attacks by using a built-in service.

• Storage, which provides the following options:

o Azure Storage. Store data in files, binary large objects (BLOBs), tables, and queues.

o Microsoft Azure StorSimple. Provision a multitier storage solution that provides cloud hosting for
on-premises data.

o Data Lake Store. Create hyperscale repositories for big data analytics.

o Azure Backup. Provide retention and recovery by backing up your on-premises and cloud-based
Windows and Linux systems to Azure.
o Azure Site Recovery. Design and implement disaster-recovery solutions for failover to a
secondary on-premises datacenter or to Azure.

• Databases, which provides the following options:

o Azure SQL Database. Implement relational databases for your apps without having to provision
and maintain a database server.

o Azure Database for MySQL. Implement managed MySQL databases.

o Azure Database for PostgreSQL. Implement managed PostgreSQL databases.



o Azure SQL Data Warehouse. Provision a data warehouse as a service.

o SQL Server Stretch Database. Automatically extend on-premises SQL Server databases to Azure.

o Azure CosmosDB. Implement a globally distributed, schema-agnostic, multimodel data store.

o Azure Data Factory. Create data pipelines by using data storage, data-processing services, and
data movement.

o Azure Redis Cache. Implement high-performance caching solutions for your apps.

• Analytics, AI, and Machine Learning, which provides the following options:

o HDInsight. Provision Apache Hadoop clusters in the cloud.

o Azure Machine Learning. Run predictive analytics and forecasting based on existing data sets.

o Azure Data Lake Analytics. Run large-scale data-analysis jobs.

o Azure Databricks. Implement Apache Spark-based analytics solutions.


o Azure Analysis Services. Deploy a managed, enterprise-grade analytics platform.

o Azure Event Hubs. Collect telemetry data from connected devices and apps.

o Azure Bot Service. Run an intelligent, autoscaling, serverless bot service.

o Cognitive Services. Incorporate smart API capabilities into your apps.

• Internet of Things (IoT), which provides the following options:


o Azure IoT Suite, Azure IoT Hub, and Azure IoT Edge. Facilitate processing massive amounts of
telemetry data that connected devices and apps generate.

o Azure Stream Analytics. Process real-time data from connected devices and apps.

• Hybrid Integration, which provides the following options:

o Azure Service Bus. Connect apps across on-premises and cloud environments.

o The Logic Apps feature of Azure App Service. Automate running business processes and
workflows.

o Event Grid. Implement reliable delivery of a large volume of events.

o API Management. Publish and manage APIs.

• Identity and Access Management, which provides the following options:

o Azure Key Vault. Store and manage cryptographic artifacts, such as keys and passwords.

o Azure Active Directory. Integrate your on-premises Active Directory Domain Services (AD DS)
with the cloud-based Identity and Access Management solution, and provide single sign-on
(SSO) capabilities and multi-factor authentication for cloud-based and on-premises applications
and services.

o Azure Multi-Factor Authentication. Implement additional security measures in your apps to verify
user identity.

o Azure Active Directory Domain Services (Azure AD DS). Deploy managed domain controllers in
the cloud.
o Azure Active Directory B2C. Provide scalable identity and access management solutions for
customer-facing apps.

• Developer Services, which provides the following options:

o Azure Application Insights. Provide cloud-based analytics and diagnostics of app usage.

o Azure DevTest Labs. Create, monitor, and manage virtual machines in a dedicated test
environment.

• Management, which provides the following options:


o Azure Policy. Enforce governance across all your Azure resources.

o Cost Management. Gain visibility into the cost of your resources and optimize their usage.

o Azure Monitor. Simplify and enhance monitoring of Azure resources.

o Azure Automation. Automate long-running, frequently repeating, and time-consuming tasks.

o Azure Scheduler. Run tasks according to custom-defined schedules.


o Azure Log Analytics. Build operational intelligence by using data collected from your cloud and
on-premises environments.

o Azure Security Center. Access all security-related information across hybrid environments from a
single monitoring and management interface.
o Azure Advisor. Optimize your Azure environment by following the Microsoft best practices based
on telemetry data representing your resource usage.

o Azure Network Watcher. Monitor and diagnose networking functionality and performance.

Note: Microsoft is continually improving Azure and adding new services on a regular basis.

Additional Reading: For an up-to-date list of Azure services, refer to the “Products”
section at: https://azure.microsoft.com/en-us/

Understanding Azure compute-hosting options


Azure includes several options to provide apps
and compute-based services from the cloud. These
options include:
• Azure App Service and App Service
Environment

• Cloud Services

• Service Fabric

• Virtual Machines

• Containers

• Azure Container Service (AKS)

• Functions

App Service and App Service Environment


You can use App Service to quickly provision and create web, mobile, logic, or API apps in Azure. App
Service is a PaaS solution, so the platform automatically provisions and manages the underlying
infrastructure, the virtual machines, their operating systems, and the web server software. You can create
App Service solutions by using Microsoft ASP.NET, PHP, Node.js, Python, and, with Azure Web Apps on
Linux, Ruby. Web apps that use App Service can integrate with other Azure services, including SQL
Database, Service Bus, Storage, and Azure Active Directory. By using multiple copies of an app hosted on
separate virtual machines, you can rapidly scale App Service–based apps. You can publish code for App
Service apps by using the Microsoft Web Deployment Tool (Web Deploy), Microsoft Visual Studio, Git,
GitHub, File Transfer Protocol (FTP), Bitbucket, CodePlex, Mercurial, Dropbox, Microsoft Team Foundation
Server, and the cloud-based Visual Studio Team Services. For the most demanding workloads, you can use
App Service Environment, which allows you to create a multitier, dedicated environment capable of
hosting web apps, mobile apps, and API apps. App Service Environment delivers an extra performance
advantage by supporting direct virtual network connectivity.

Cloud Services
Azure Cloud Services offers multitier scalability for Windows-based web apps and greater control over the
hosting environment. When using Azure Cloud Services, you can connect to your virtual machines and
interactively perform management tasks such as registry modifications and Windows Installer–based
installations. You typically use Azure Cloud Services to deploy more complex solutions than an App
Service can provide. Azure Cloud Services is best suited for:

• Multitiered web apps.

• Web apps that require a highly scalable, high-performance environment.


• Web apps that have additional application dependencies or require minor operating system
modifications.

Virtual Machines
Of the available compute options, Azure VMs provide the greatest flexibility and control. As an IaaS
solution, Azure VMs operate like Microsoft Hyper-V virtual machines on Windows Server 2016. You have
complete control over the virtual machine at the operating system level, but, as a result, you are also
responsible for maintaining that operating system, including installing updates and backups. Unlike with
Web Apps or Cloud Services, you can use custom operating system images. Azure VMs are best suited for:

• Highly customized apps that have complex infrastructure or operating system requirements.
• Hosting Windows Server or Linux apps and infrastructure services, such as AD DS, DNS, or a database
management system (DBMS).

Service Fabric
Service Fabric is a cloud-based platform for developing, provisioning, and managing distributed, highly
scalable, and highly available services and applications. Its unique approach to service and application
architecture involves dividing their functionality into individual components called microservices. Common
examples of such microservices include shopping carts or user profiles of commercial websites and
queues, gateways, and caches that provide infrastructure services. Multiple instances of these
microservices run concurrently on a cluster of virtual machines.

This is similar to the multitier architecture of Cloud Services, which supports independent scaling of web
and worker tiers. However, Service Fabric operates on a much more granular level, as the term
microservices suggests. This allows a more efficient resource utilization and support for scaling to
thousands of virtual machines. Additionally, it allows developers to introduce gradual changes in the code
of individual application components without having to upgrade the entire application.

Another feature that distinguishes Service Fabric from traditional PaaS services is support for both
stateless and stateful components. Cloud Services are stateless by design. To save the state information,
they must rely on other services, such as Azure Storage or Azure SQL Database. Service Fabric, on the
other hand, offers built-in support for maintaining state information. This minimizes or even eliminates
the need for back-end storage. It also decreases the latency when accessing application data.

Containers
Containers are the next stage in virtualizing computing resources. Initially, virtualization reduced the
constraints of physical hardware. It enabled running multiple isolated instances of operating systems
concurrently on the same physical hardware. Container-based virtualization virtualizes the operating
system, allowing you to run multiple applications within the same operating system instance while
maintaining isolation between them. Containers within a virtual machine provide functionality similar to
that of virtual machines on a physical server. However, there are some important differences between
virtual machines and containers, as listed in the following table.

Feature                   | Virtual machines                                                                 | Containers
--------------------------|----------------------------------------------------------------------------------|----------------------------------------------------------------
Isolation mechanism       | Built into the hypervisor                                                        | Relies on operating system support.
Required amount of memory | Includes operating system and app requirements                                   | Includes requirements for the containerized apps only.
Startup time              | Includes operating system boot and start of services, apps, and app dependencies | Includes only the start of apps and app dependencies. The operating system is already running.
Portability               | Portable, but the image is larger because it includes the operating system       | More portable because the image includes only apps and their dependencies.
Image automation          | Depends on the operating system and apps                                         | Based on the container registry.

Compared with virtual machines, containers offer several benefits, including:

• Increased speed for developing and sharing application code.

• An improved lifecycle for testing applications.


• An improved deployment process for applications.

• An increase in the density of your workloads, resulting in improved resource utilization.

At the time of authoring this course, the most popular containerization technology is available from
Docker. Docker uses Linux built-in support for containers. Windows Server 2016 and Windows 10
introduced support for Docker containers on the Windows operating system platform.
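As an illustration, the following shell commands sketch the basic Docker workflow. This is a minimal example, assuming Docker is already installed on the host; hello-world is a small test image that Docker publishes for exactly this purpose.

```shell
# Download a small test image from the Docker Hub registry
docker pull hello-world

# Run the image as an isolated container within the host operating system
docker run hello-world

# List all containers (running and stopped) and locally cached images
docker ps --all
docker images
```

Because the container shares the host operating system kernel, the hello-world container starts in a fraction of the time a comparable virtual machine would need to boot.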

Azure Container Service (AKS)


Azure Container Service allows you to administer clusters of multiple hosts running containerized apps.
AKS manages the provisioning of cloud infrastructure components, including Azure VMs, virtual networks,
and load balancers. Additionally, it enables you to manage and scale containerized apps to tens of
thousands of containers via integration with the Kubernetes orchestration engine.

Note: Module 7, “Implementing containers in Azure” covers containers and AKS in detail.

Functions
Functions provide a convenient method of running custom code in Azure by eliminating any
infrastructure considerations. To implement functions, customers must provide their code written in C#,
F#, Node.js, Python, or PHP and specify a trigger that will initiate code execution. The Azure platform
handles the provisioning and scaling of underlying compute resources, dynamically adjusting to changes
in conditions that triggered the function execution. The charges reflect only the time during which the
code is running.

Functions support integration with a wide range of services, including Azure Storage, Mobile Apps,
Notification Hubs, Event Hubs, and cloud-based and on-premises resident instances of Service Bus. These
services can serve as triggers and provide input or output for functions.

Azure deployment models


Azure supports two deployment models: Azure
Resource Manager and classic. The deployment
model you choose determines how you provision
and manage Azure resources. It also affects the
properties and methods that these resources
support and the actions that you can apply to
them.

The classic (or Service Management, as it was
originally called) deployment model was the
primary method for provisioning Azure services.
The model had a corresponding API, which was
available not only via programming means but
also through scripting and a web-based portal.

As Microsoft cloud technologies evolved and matured, the original deployment model underwent a major
redesign. Its successor, Azure Resource Manager, introduced an innovative approach to administering
Azure services, focusing on the concepts of resources and resource groups. Resources represent
individual building blocks of Azure-based solutions, and resource groups provide a way to group these
resources into logical containers.

Azure Resource Manager has its own API, which is available through programming and scripting methods.
Microsoft also developed a new web-based portal, the Azure portal, which provides access to both Azure
Resource Manager and classic resources.

Note: The classic portal was discontinued in January 2018.

Note: You will learn more about Azure Resource Manager later in this module.

Azure management tools


You can use several different methods to manage
an Azure environment. While using programming
or calling REST API offers the most functionality
and flexibility, both approaches require
development skills. Fortunately, there are simpler
ways to carry out the majority of management tasks.
The following list summarizes the available
choices:

• The Azure portal, accessible from https://portal.azure.com. You can use it to administer Azure from
most web browsers.

• Azure PowerShell. You can use the open-source Azure PowerShell modules to manage your Azure
environment from the command line and via custom scripts. Azure PowerShell modules are available
for the Windows, Linux, and Mac OS platforms. You can find downloadable installation files on GitHub.
Alternatively, you can perform the installation via PowerShellGet, which downloads the modules
automatically from the PowerShell Gallery.

• Azure CLI. The Azure CLI is an open-source command line and scripting tool that provides Azure
management capabilities equivalent to those of Azure PowerShell. Just as with Azure PowerShell, its
source code and installation files for the Windows, Linux, and Mac OS platforms are available from
GitHub. Azure CLI integrates closely with the Bash shell.
• Azure Cloud Shell. Azure Cloud Shell provides the ability to run Azure PowerShell cmdlets and CLI
commands directly from within the interface of the Azure portal.

• Visual Studio. You can use Azure SDK to manage Azure resources from the Visual Studio integrated
development environment (IDE). Azure Tools, which are part of Azure SDK, provide the Cloud
Explorer window and an extension to the Server Explorer window within Visual Studio. This enables
you to work with Azure resources without relying on programming methods.
• Visual Studio Code. You can use Azure Extension Pack from Visual Studio Marketplace to extend the
functionality of Visual Studio Code, which enables you to manage a variety of Azure resources from
its interface.

Check Your Knowledge


Question

Which of the following services are not available from Azure Marketplace?

Select the correct answer.

Virtual Machines

Web Apps

Storage Spaces

Container Service

DNS

Lesson 3
Managing Azure with the Azure portals
You can provision and manage Azure subscriptions and resources by using web-based portals. The portals
serve as the primary administrative interface for most Azure customers. Being familiar with their
navigational features and their functionality will benefit your productivity and simplify your administrative
tasks.

Lesson Objectives
After completing this lesson, you will be able to:

• Explain the Azure portal.

• Describe how to manage subscriptions with the Azure portal and the Azure Account Center.

• Use the Azure portals to manage Azure.

Using the Azure portal


The Azure portal, at https://portal.azure.com,
provides web browser-based administration of
Azure resources. The portal simplifies most
administrative tasks in Azure.

Portal elements and concepts


The Azure portal graphical interface contains the
following elements:

• Dashboard. This customizable webpage is the entry point into your Azure environment. You can
customize it by populating it with resizable tiles that represent shortcuts to Azure resources and other
items accessible via the portal. By default, the dashboard includes several precreated tiles, including a
global Azure service health tile, a tile providing a shortcut to a list of all provisioned resources, and the
Marketplace tile. You can create multiple dashboards, switch between them based on your needs, and
share them with others.

• Blades. Blades are scrollable panes in which you can view and configure details of a selected item. As
you select items in the current blade, new blades open on the right side of it, automatically scrolling
your current view horizontally in the same direction. You can maximize and minimize blades to
optimize screen space and simplify navigation.

• Hub menu. The hub menu is a customizable, vertical bar on the left side of the portal. It contains the
Create a resource and All services entries. The Create a resource entry serves as a starting point for
creating new resources in your Azure environment. Service provisioning occurs asynchronously. You
can monitor the provisioning status by clicking the notification (bell) icon in the upper part of the
portal page. The All services entry allows you to explore existing services based on the service type
or their names.

Other navigational features that enhance user experience include the:

• Microsoft Azure label in the upper-left corner of the portal, which displays the dashboard.

• Search resources text box in the toolbar at the top of the portal interface, which includes a listing of
recently accessed resources, in addition to providing search capabilities.
• Support for keyboard shortcuts, a list of which you can display by accessing the Help drop-down
menu in the upper-right corner of the portal.

The Azure portal supports deployment and management of both Azure Resource Manager and classic
resources. You can easily distinguish between them since the portal includes the word “classic” in the
interface elements that reference classic resources. For example, the All services menu contains both
Virtual machines and Virtual machines (classic) entries.

Managing account subscriptions with the Azure portals


As the Account Administrator, you can manage
most Azure subscription settings and view billing
data from the Azure portal. The Billing blade
allows you to view the contact information, billing
address, payment methods, and invoices. The
Overview pane of this blade provides access to
billing history and subscription costs. It also
displays a list of subscriptions for which you have
Account Administrator privileges. From the subscriptions
listed on the Billing blade, you can
navigate to their respective blades. Alternatively,
you can also access the list of subscriptions from
the Subscriptions blade. Either of these methods allows you to view charts that summarize the cost by
resource type and burn rate on the subscription level. The Cost analysis blade provides detailed charges
for individual resources.

To manage subscription payment methods, navigate to the Azure Account Center at
https://account.azure.com/subscriptions (an Azure account is required). In the Azure Account Center, from
the subscriptions page, you can also access the following options:

• Download usage details

• Contact Microsoft support

• Edit subscription details

• Change subscription address

• View partner information

• Cancel your subscription

Note: Customers with an Enterprise Agreement with Microsoft have access to the Azure
Enterprise Portal, which simplifies management of multiple accounts and subscriptions.

Additional Reading: For more information regarding the Azure Enterprise Portal, refer to:
http://aka.ms/V91c9h

Additional Reading: Rather than relying on the Azure portals, you can retrieve usage
data programmatically by using the Azure Resource Usage API. Similarly, you can use the Azure
Resource RateCard API to obtain estimated pricing information for Azure resources. For more
information, refer to: https://aka.ms/ab675f

Demonstration: Using the Azure portals


In this demonstration, you will see how to:

• Use the new Azure portal.

• Use the Azure Account Center.

Question: Which features of the Azure portal do you find most useful?

Lesson 4
Managing Azure with PowerShell
The Azure portals provide a graphical user interface (GUI) for managing Azure subscriptions and services.
In many cases, they are the primary management tools for service provisioning and operations. However,
many organizations want to automate their IT processes by creating reusable scripts or by combining
Azure resource management with the management of other network and infrastructure services.
PowerShell provides a scripting platform for managing a wide range of environments, including Azure.
This lesson explores how you can use Windows PowerShell to connect to an Azure subscription and to
provision and manage Azure services.

Lesson Objectives
After completing this lesson, you will be able to:

• Identify the PowerShell modules for managing Azure.


• Explain the differences between Azure AD Authentication and certificate authentication.

• Distinguish between the PowerShell cmdlets used for the classic deployment model and for the Azure
Resource Manager deployment model.
• Use PowerShell to manage Azure.

Azure PowerShell modules


The primary strength of PowerShell is its
extensibility, which relies on its ability to
dynamically load software modules that contain
cmdlets and functions. You can run these
functions and cmdlets interactively from the
Windows PowerShell console prompt and the
Windows PowerShell Integrated Scripting
Environment (Windows PowerShell ISE) console
pane. Alternatively, you can incorporate them into
custom scripts. Most management tasks that
target Microsoft Azure resources rely on Azure
PowerShell modules.

Azure PowerShell
To manage Azure resources by using Windows PowerShell, you first must install the Azure PowerShell
modules that provide this functionality. In this course, you will work mainly with the AzureRM modules,
which include cmdlets that implement features of Azure Resource Manager resource providers. For
example, cmdlets of the Compute provider, which facilitates the deployment and management of Azure
VMs, reside in the AzureRM.Compute module.

In some cases, deploying and managing Azure resources and services might require using other modules.
For example, to work with classic resources, you must use the Azure PowerShell Service Management
module called Azure. Similarly, there are separate modules that you can use to manage Azure AD, Azure
Information Protection, Azure Service Fabric, and Azure ElasticDB, for example.

Additional Reading: For the list of Azure PowerShell modules, refer to: “PowerShell
Module Browser” at: https://aka.ms/urrgkq

Azure PowerShell is managed as an open-source project, with the repository hosted on GitHub at
https://aka.ms/gaoe3s. You can install and use Azure PowerShell on Windows, Linux, and Mac OS.

The three primary methods of installing the latest versions of the Azure PowerShell modules are:

• The Web Platform Installer (Web PI). This installation method is available directly from the Azure
Downloads page. It simplifies the setup process by relying on Web PI capabilities, which automatically
deploys and configures all prerequisites and installs the most recent version of the modules.

Additional Reading: For more information, refer to the Microsoft Azure “Downloads” page
at: https://aka.ms/vgz7tb

• The PowerShell Gallery. This method relies on the capabilities built into the PowerShellGet module,
which facilitates discovery, installation, and updates of a variety of PowerShell artifacts, including
other Windows PowerShell modules. PowerShellGet relies on the functionality built into Windows
Management Framework 5.1, which is part of the operating system, starting with Windows 10 and
Windows Server 2016. The same version of Windows Management Framework is also available at
https://aka.ms/r3meci. You can download and install it on any supported version of Windows, starting
with Windows 7 Service Pack 1 and Windows Server 2008 R2. Note, however, that this will
automatically upgrade Windows PowerShell to the matching version. If you want to enable the
PowerShellGet functionality on systems running Windows PowerShell 3.0 or Windows PowerShell
4.0, you must install the PackageManagement module available at https://aka.ms/xdrgnc.

To perform the installation based on PowerShellGet, run the Install-Module cmdlet from an elevated
session within the Windows PowerShell console or from the Windows PowerShell ISE console pane. To
install the Azure PowerShell modules from the PowerShell Gallery, run the following commands at the
Windows PowerShell command prompt:

Install-Module AzureRM
Install-Module Azure

Additional Reading: For more information, refer to: “Windows Management Framework
5.1” at: https://aka.ms/r3meci

• Microsoft Windows Installer (MSI) packages. This method allows you to install the current or any
previously released version of Azure PowerShell by using MSI packages available on GitHub. The
installation will automatically remove any existing Azure PowerShell modules.

Additional Reading: For more information, refer to: “Azure/azure-powershell” at:
http://aka.ms/Vep7fj

Note: Web PI installs Azure PowerShell modules within the
%ProgramFiles%\Microsoft SDKs\Azure\PowerShell directory structure. PowerShell Gallery–based installations
use the %ProgramFiles%\WindowsPowerShell\Modules version-specific directory structure.
MSI packages also install into %ProgramFiles%\WindowsPowerShell\Modules; however, they
do not use version-specific subfolders. PowerShell Gallery–based installation allows you to install
multiple versions of the Azure PowerShell module on the same operating system by supporting
the -RequiredVersion parameter of the Import-Module cmdlet. Each installation method
automatically updates the $env:PSModulePath variable.
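Once installation completes, you can verify which modules and versions are present by using standard PowerShellGet and module cmdlets, as in this brief sketch:

```powershell
# List the Azure PowerShell modules currently available on this system
Get-Module -ListAvailable -Name AzureRM, Azure |
    Select-Object -Property Name, Version, ModuleBase

# For PowerShell Gallery-based installations, update to the latest version
Update-Module -Name AzureRM
```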

Azure AD module for Windows PowerShell


If you plan to manage users, groups, and other aspects of Azure AD from Windows PowerShell, you
should install the Azure Active Directory PowerShell for Graph module. The module is available from
the PowerShell Gallery and you can install it by running the following cmdlet from a Windows PowerShell
prompt:

Install-Module -Name AzureAD

Additional Reading: For more information, refer to: “Azure AD” at: https://aka.ms/Vz20pp
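Once installed, the module can be used as in the following brief sketch. Connect-AzureAD prompts interactively for Azure AD credentials, after which the remaining cmdlets run in that security context:

```powershell
# Sign in to the Azure AD tenant
Connect-AzureAD

# List the first five users in the directory
Get-AzureADUser -Top 5
```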

Alternatively, you can use an earlier version of the Azure ActiveDirectory (MSOnline) module. At the time
of writing this course, this module offers some extra functionality not yet implemented in Azure Active
Directory PowerShell for Graph. This module is also available from the PowerShell Gallery. To install it, run
the following cmdlet from a Windows PowerShell prompt:

Install-Module -Name MSOnline

Additional Reading: For more information, refer to “Azure ActiveDirectory (MSOnline)” at:
https://aka.ms/rqcbd9

Azure Automation Authoring Toolkit


You can use the Azure Automation service to run Windows PowerShell workflows and scripts as runbooks
directly in Azure, either on demand or based on a schedule. While it is possible to develop Azure
Automation runbooks directly in the Azure portal, you can also use the Windows PowerShell ISE for this
purpose. To simplify the process of developing runbooks in Windows PowerShell ISE, install the Azure
Automation Authoring Toolkit and its ISE add-on from the PowerShell Gallery by running the following
cmdlets:

Install-Module AzureAutomationAuthoringToolkit -Scope CurrentUser
Install-AzureAutomationIseAddOn

Authenticating to Azure by using Windows PowerShell


After you install the Azure PowerShell module, you
must first authenticate successfully to access your
Azure subscription. There are two basic
authentication methods: Azure AD Authentication
and certificate-based authentication.

Azure AD Authentication
You can use Azure AD Authentication to access an
Azure subscription by using one of the following
types of credentials:
• A Microsoft account

• A Work or School account

• An Azure AD service principal



An Azure AD service principal is an identity that you can associate with an application or a script that you
want to execute in its own, dedicated security context. An ApplicationId attribute uniquely identifies each
service principal. You can configure a service principal to authenticate by using either a password or
a certificate.
To authenticate when using the Azure Resource Manager PowerShell module, use the
Add-AzureRmAccount cmdlet. This triggers an interactive sign-in, displaying a browser window in which you
must enter valid Azure AD credentials. Azure AD Authentication is token-based, and after you sign in, the
credentials associated with the Windows PowerShell session persist until the authentication token expires.

Additional Reading: The expiration time for an Azure AD Authentication token depends
on several factors. For more information, refer to: “Configurable token lifetimes in Azure Active
Directory (Public Preview)” at: https://aka.ms/dyy43e

After you authenticate, you can use the Get-AzureRmContext cmdlet to view the user account, the
corresponding Azure AD tenant, and the Azure subscriptions associated with the current Windows
PowerShell session. The Get-AzureRmSubscription cmdlet provides a subscription-specific subset of
this information. If you have multiple subscriptions, you can set the current subscription by using the
Set-AzureRmContext cmdlet with the name or ID of the subscription that you want to use. To
save the current authentication information to reuse it in another Windows PowerShell session, use
Save-AzureRmProfile. Then you can retrieve the authentication information later by running
Select-AzureRmProfile.
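The cmdlets described above can be combined as follows. This sketch assumes an interactive session; the subscription name "Production" and the profile path are hypothetical placeholders:

```powershell
# Sign in interactively; a browser-based Azure AD prompt appears
Add-AzureRmAccount

# Review the account, tenant, and subscription for this session
Get-AzureRmContext

# List all subscriptions that the signed-in account can access
Get-AzureRmSubscription

# Switch the session to a specific subscription by name
Set-AzureRmContext -SubscriptionName "Production"

# Save the authentication information for reuse in another session
Save-AzureRmProfile -Path "$env:USERPROFILE\azureprofile.json"

# In a later session, load the saved profile instead of signing in again
Select-AzureRmProfile -Path "$env:USERPROFILE\azureprofile.json"
```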

Additional Reading: For information, refer to: “Using AAD Credentials with Azure
PowerShell Cmdlets” at: https://aka.ms/kcsefe

Certificate-based authentication
Most tools that you use to manage Azure support Azure AD Authentication. Generally, we recommend
using Azure AD Authentication as the primary authentication mechanism. However, in some cases, it
might be more appropriate to authenticate by using certificates. For example, this allows you to run your
scripts unattended by eliminating interactive authentication prompts.

How you implement certificate-based authentication depends on whether you intend to interact with
Azure Resource Manager or classic resources.

With Azure Resource Manager, the process involves the following steps:
1. Obtaining a certificate. You can use either a self-signed certificate or a certificate issued by a
certificate authority.

2. Creating an Azure AD service principal and associating it with the certificate.


3. Granting the service principal appropriate permissions to resources within the Azure subscription. The
level of permissions should reflect the scope of tasks that the script or application must be able to
carry out.
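The three steps above can be sketched with AzureRM cmdlets as follows. This example is illustrative only: it assumes Windows 10 or Windows Server 2016 for the New-SelfSignedCertificate cmdlet, and the display name "AutomationScript" and the Reader role are hypothetical choices you would adjust to the scope of your script:

```powershell
# Step 1: Create a self-signed certificate in the current user's store
$cert = New-SelfSignedCertificate -Subject "CN=AutomationScript" `
    -CertStoreLocation "Cert:\CurrentUser\My" -KeySpec KeyExchange
$keyValue = [System.Convert]::ToBase64String($cert.GetRawCertData())

# Step 2: Create the service principal and associate it with the certificate
$sp = New-AzureRmADServicePrincipal -DisplayName "AutomationScript" `
    -CertValue $keyValue -StartDate $cert.NotBefore -EndDate $cert.NotAfter

# Step 3: Grant the service principal permissions that match its tasks
New-AzureRmRoleAssignment -RoleDefinitionName Reader `
    -ServicePrincipalName $sp.ApplicationId

# A script can then authenticate unattended by using the certificate
Add-AzureRmAccount -ServicePrincipal -TenantId (Get-AzureRmContext).Tenant.Id `
    -ApplicationId $sp.ApplicationId -CertificateThumbprint $cert.Thumbprint
```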

Additional Reading: For more information, refer to: “Use Azure PowerShell to create a
service principal with a certificate” at: http://aka.ms/Yym3a7

When implementing certificate-based authentication in the classic deployment model, you can also use
either a self-signed certificate or a certificate issued by a certification authority (CA). To import the
certificate into your Azure subscription, you can use the Azure portal.

Additional Reading: For more information, refer to: “Upload an Azure Service
Management Certificate” at: https://aka.ms/Gxgwho

In addition, store the certificate in the personal certificate store of the user who needs to access the Azure
subscription.

To authenticate by using the certificate in Windows PowerShell, you can use the Set-AzureSubscription
cmdlet, specifying the subscription name, subscription ID, and certificate. You can obtain the subscription
ID from the Azure portal, and you can reference the certificate in Windows PowerShell by using the
Get-Item cmdlet.
The following code example shows how to set the current subscription by using a specific certificate.

Using a specific certificate

$subName = "<the subscription name>"
$subId = "<copy the subscription ID from the Azure portal>"
$thumbprint = "<the thumbprint of the certificate you want to use>"
$cert = Get-Item Cert:\CurrentUser\My\$thumbprint
Set-AzureSubscription -SubscriptionName $subName -SubscriptionId $subId -Certificate $cert

To obtain the certificate thumbprint, you can either view the certificate in Certificate Manager or use the
Windows PowerShell command Get-Item Cert:\CurrentUser\My\* to obtain a list of all the personal
certificates and their thumbprints.

Azure PowerShell cmdlets for Azure classic deployment model and Azure
Resource Manager
After you authenticate from a Windows
PowerShell session to your Azure subscription, you
can use Azure PowerShell cmdlets to view,
provision, and manage Azure resources. The
cmdlets you will use depend on the deployment
model used to provision the resources. The Azure
classic deployment model uses the Azure module
for PowerShell, whereas the Azure Resource
Manager model uses the AzureRM module for
PowerShell.

You can easily distinguish between Azure Resource
Manager and classic cmdlets because they use
slightly different formats. Both types of cmdlets use the verb-noun syntax, but while the noun portions of
Azure Resource Manager cmdlets start with AzureRm, the Service Management cmdlets include only
Azure (without the Rm string). For example, to deploy a new Azure virtual machine by using Azure
Resource Manager, you run the New-AzureRmVM cmdlet. To accomplish the same task in the classic
deployment model, you use the New-AzureVM cmdlet. In addition, the noun portion might differ due to
service name changes that the Azure Resource Manager model introduced.

The following table illustrates some of the differences between the two sets of PowerShell cmdlets.

Functionality or command | Azure classic deployment model | Azure Resource Manager deployment model
-------------------------|--------------------------------|----------------------------------------
Sign in to Azure         | Add-AzureAccount               | Add-AzureRmAccount
Create a virtual machine | New-AzureVM                    | New-AzureRmVM
Create a web app         | New-AzureWebsite               | New-AzureRmWebapp
Note: Login-AzureRmAccount is an alias of Add-AzureRmAccount.
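One way to explore these naming differences yourself is to list the cmdlets that each module exposes. The following sketch uses the Get-Command cmdlet and assumes both the Azure and AzureRM modules are installed:

```powershell
# Azure Resource Manager cmdlets for virtual machines (noun starts with AzureRm)
Get-Command -Module AzureRM.Compute -Name *-AzureRmVM

# Classic (Service Management) equivalents (noun starts with Azure only)
Get-Command -Module Azure -Name *-AzureVM
```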

Demonstration: Using Azure PowerShell


In this demonstration, you will see how to use Azure PowerShell to:

• Create a resource group.

• Create a storage account.

• Delete a resource group with its resources.

Question: How can you identify whether a cmdlet is part of the classic or Azure Resource
Manager module?

Lesson 5
Managing Azure with Azure CLI
While the Azure PowerShell module is available on Linux and Mac OS, in addition to Windows, its primary
users tend to work mostly with Microsoft technologies. Those operating in open-source environments
favor scripting tools that integrate with UNIX and Linux shells. Azure CLI provides this type of integration.
In this lesson, you will learn about the basic characteristics of Azure CLI, its installation methods, and a few
basic commands that allow you to access your Azure subscription.

Lesson Objectives
After completing this lesson, you will be able to:

• Distinguish between Azure CLI 1.0 and Azure CLI 2.0.

• Install Azure CLI.

• Use Azure CLI to access an Azure subscription.

Azure CLI versions


The Azure CLI provides a command-line,
shell-based interface that you can use to interact with
your Azure subscriptions. The Azure CLI offers a
similar set of features as the Azure PowerShell
modules. Its primary advantage is close integration
with shell scripting, including support for popular
tools such as grep, awk, sed, jq, and cut, allowing
Linux administrators to leverage their existing skills
when managing Azure resources.

At the time of authoring this course, there are two
versions of Azure CLI:

• Azure CLI 1.0 (sometimes referred to as Azure
Cross-Platform Command-Line Interface or XPlat-CLI). This version is written in Node.js to provide
cross-platform support. Its open-source repository resides at https://aka.ms/qxp7e4.

• Azure CLI 2.0. This version, written in Python, offers several improvements and new features
compared to its predecessor. These features include the ability to build pipelines consisting of Azure
CLI commands and shell tools, tab completion for commands and parameter names, support for
asynchronous command execution, and enhanced in-tool help. Its open-source repository resides at
https://aka.ms/jddotl.

Azure CLI 1.0 supports both the classic and Azure Resource Manager deployment models. Azure CLI 2.0
supports only the Azure Resource Manager deployment model. If you still manage classic resources but
want to leverage the Azure CLI 2.0 features, you can run both versions side by side. In fact, both CLIs share
credentials that you provide and the Azure subscriptions that you select by default, simplifying your
management experience in a mixed environment. You can easily identify the version of a CLI command,
because Azure CLI 1.0 commands start with the keyword azure, while Azure CLI 2.0 commands start with
the keyword az.
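For example, listing virtual machines looks as follows in each version. This is a sketch assuming both CLIs are installed and you have already signed in to your subscription:

```shell
# Azure CLI 1.0: commands start with the keyword "azure"
azure vm list

# Azure CLI 2.0: commands start with the keyword "az"
az vm list --output table

# Azure CLI 2.0 in-tool help for any command group
az vm --help
```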

Both versions of Azure CLI are available on Windows, Linux, and Mac OS. You can install Azure CLI 2.0
directly on Windows or within a Bash environment on Windows. The second method offers a user
experience that is closest to running Azure CLI directly on Linux. This allows you to run most Linux scripts
without any modifications. Both Azure CLI 1.0 and Azure CLI 2.0 are also available in the Cloud Shell
environment accessible directly from the Azure portal.

Additional Reading: To implement the Bash environment on Windows in Windows 10 or
Windows Server 2016, install Windows Subsystem for Linux. For more information, refer to:
“Windows Subsystem for Linux Documentation” at: https://aka.ms/Kfk1qp

Note: In this course, you will be using Azure CLI 2.0, resorting to Azure CLI 1.0 only when
managing Azure classic resources.

Installing Azure CLI


The installation process for the Azure CLI depends
on its version and, to some extent, on the target
operating system. Because Azure CLI 1.0 was
developed by using Node.js, you must install
Node.js before installing Azure CLI 1.0. You can
obtain Node.js installers and binaries for Windows,
Linux, and Mac OS operating systems from
https://aka.ms/i7hw4o. Python is a prerequisite for
installing Azure CLI 2.0. Python installers are
available at https://aka.ms/wfus7d.

Installing Azure CLI 1.0


After you install Node.js, you can use the Node package manager (npm) command-line tool to install the Azure CLI 1.0 package by running the following command:

npm install -g azure-cli

You can also deploy a Docker container running Azure CLI 1.0 onto a Docker host. To do so, use the
docker command-line tool and run the following command:

docker run -it microsoft/azure-cli

Alternatively, you can download precompiled installers from the Azure CLI 1.0 GitHub repository. The
installers are available for Windows, Linux, and Mac OS.

Additional Reading: For more information about installing Azure CLI 1.0, refer to: “Azure
Xplat-CLI for Windows, Mac and Linux” at: https://aka.ms/qxp7e4

Installing Azure CLI 2.0


You can use precompiled installers for Windows, Linux, and Mac OS to install Azure CLI 2.0. If you
implement Azure CLI 2.0 in a Bash environment on Windows, you can use the apt-get tool. You can use
the same tool when running Debian and Ubuntu Linux distributions. Both Linux and Mac OS also support
installation of Azure CLI 2.0 via the curl command referencing the http://aka.ms/InstallAzureCli URL.
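As an illustration of the curl-based method, the following commands sketch a typical Linux or Mac OS installation; the exact invocation may vary by distribution and shell:

```shell
# Download the Azure CLI 2.0 install script and run it with bash.
# The -L switch makes curl follow the redirect from the short URL.
curl -L http://aka.ms/InstallAzureCli | bash

# Restart the shell or reload your profile, then verify the installation.
az --version
```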

Additional Reading: For more information about installing Azure CLI 2.0, refer to: “Install
Azure CLI 2.0” at: https://aka.ms/ng96x1

The installation modifies the Path system environment variable. This allows you to run Azure CLI
commands directly from a command prompt window on Windows or command shell on Linux or Mac OS.

Using Azure CLI to access your Azure subscription


Installing Azure CLI gives you a set of tools to
manage Azure resources. However, just as with
Azure PowerShell modules, you must first
authenticate before accessing the Azure
subscription containing these resources. You can
do so by using a Microsoft account, a work or
school account, or a service principal that exists in
the Azure AD tenant associated with that
subscription.

To initiate the authentication process, run one of the following commands (depending on the Azure CLI version) from a command shell or a command prompt:

azure login
az login

In response, the shell will display a message prompting you to start a browser and browse to the Device
Login page at http://aka.ms/devicelogin. There you must enter the code provided as part of the shell
message. This step verifies the Azure CLI as the application publisher and allows you to enter your user
credentials to authenticate to the Azure subscription.

Azure AD Authentication is token-based, and after you sign in, the credentials associated with the Azure
CLI session persist until the authentication tokens expire.

Additional Reading: Just as with Windows PowerShell, the expiration time for an Azure AD authentication token depends on several factors. For more information, refer to: “Configurable token lifetimes in Azure Active Directory (Public Preview)” at: https://aka.ms/dyy43e

After you authenticate, you can use the azure account list (Azure CLI 1.0) or az account list (Azure
CLI 2.0) command to view a list of subscriptions associated with your account. If you have multiple
subscriptions, you can specify the one you want to manage by using the azure account set or
az account set command, and providing either the subscription name or its ID.

Note: You can identify the subscription name and ID by reviewing the output of the azure
account list or az account list command.
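For example, a typical Azure CLI 2.0 session might look like the following sketch; the subscription name shown is a placeholder:

```shell
# Sign in interactively; the shell displays a code to enter at http://aka.ms/devicelogin.
az login

# List the subscriptions associated with your account, including their names and IDs.
az account list --output table

# Select the subscription that subsequent commands should target.
# "Production" is a placeholder for one of your subscription names.
az account set --subscription "Production"
```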

Azure CLI 1.0 supports both Azure Resource Manager and classic deployment models but uses separate
modes for working with each. To switch between them, you must use the azure config mode command.

To switch to the Azure Resource Manager mode, run the following command:

azure config mode arm

To switch to the classic deployment mode, run the following command:

azure config mode asm

Demonstration: Using Azure CLI


In this demonstration, you will see how to:

• Create a resource group.

• Create a storage account.

• Delete a resource group with its resources.

Question: What would you consider to be primary strengths of Azure CLI 2.0?

Lesson 6
Overview of Azure deployment models
As mentioned earlier in this module, there are two methods of deploying Azure resources. The traditional
approach, referred to as the Azure classic deployment model, relies on the Service Management API.
While classic deployments still exist, you should use the newer Azure Resource Manager deployment
model for any new deployments.
This lesson covers both deployment models, including the differences between them. This lesson also
discusses scenarios that involve interaction between their respective resources. However, this course
emphasizes the Azure Resource Manager model because this is the prevailing deployment methodology.

Lesson Objectives
After completing this lesson, you will be able to:

• Explain core concepts of the Azure Resource Manager deployment model.


• Identify tasks involved in managing resources and resource groups.

• Describe the Azure Resource Manager deployment methodologies.

• Explain the basic structure of Azure Resource Manager templates.


• Describe the syntax of individual components of Azure Resource Manager templates.

• Deploy an Azure Quickstart template.

• Explain core concepts of the Azure classic deployment model.

Core concepts of Azure Resource Manager deployment model


The concept of the resource is fundamental in
Azure Resource Manager. A resource is an
elementary building block of services and
solutions that you deploy into Azure. You can
manage each resource by interacting with its
resource provider, which implements actions that
you invoke through an administrative interface,
such as the Azure portal, Azure PowerShell, Azure
CLI, or REST API.

Every resource exists in one, and only one, resource group. A resource group is a logical
container that simplifies managing multiple
resources. Resources in the same resource group typically share the same lifecycle, although you have full
flexibility in choosing your own criteria for grouping resources. By using resource groups, you can manage
some aspects of resource configuration at the group level, rather than individually. For example, you can
delegate permissions, determine costs, and audit events for all resources within a group in a single step.
Additionally, when you no longer need the resources in a resource group, you can remove them by
deleting the group in which they reside.
Azure Resource Manager supports very granular delegation of administration based on the role-based
access control (RBAC) model. The delegation relies on predefined and custom-defined roles within the
Azure AD tenant associated with the Azure subscription. Each role represents a collection of actions and
the corresponding resources. For example, you can create a role that will grant the ability to stop and start

an Azure virtual machine. Alternatively, you can use the predefined Virtual Machine Contributor role that
grants the ability to carry out a more extensive set of virtual machine management actions. Once you
create a new role or identify an existing one, you associate it with the scope of delegation, such as an
entire subscription, a resource group, or even an individual resource. Finally, you assign the role to a user,
group, or service principal in the Azure AD tenant associated with the Azure subscription hosting the
resources you manage.
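As a hedged sketch of such a delegation, the following Azure CLI 2.0 command assigns the predefined Virtual Machine Contributor role at the scope of a single resource group; the user name and resource group name are placeholders:

```shell
# Grant a user the Virtual Machine Contributor role, scoped to one resource group.
# The assignee and group name are illustrative values.
az role assignment create \
  --assignee user@adatum.com \
  --role "Virtual Machine Contributor" \
  --resource-group 20533E-LabRG
```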

Tagging is another feature of the Azure Resource Manager deployment model. Tags are custom labels
that you can assign to resources and resource groups. You can utilize this functionality to describe your
cloud environment. For example, you can specify the ownership of individual resources, assign them to
the appropriate cost center, or designate them as production, test, or development. Tags appear in the
billing data available to the Account Administrator. This simplifies identifying costs associated with tagged
resources for chargeback purposes.
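As an illustrative sketch, you can assign tags when creating a resource group with Azure CLI 2.0; the tag names and values below are assumptions made for this example:

```shell
# Create a resource group and label it with ownership and cost-center tags.
az group create --name 20533E-LabRG --location westeurope \
  --tags owner=ITOps costCenter=1234 environment=test

# Review the tags assigned to the group.
az group show --name 20533E-LabRG --query tags
```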

Azure Resource Manager offers a distinct deployment methodology, not available in the classic
deployment model. This methodology leverages deployment templates. A template is a JavaScript Object
Notation (JSON)–formatted file that defines a collection of resources that you intend to create and
configure. During a deployment, you provide the template, specify its parameters, and specify the
resource group where the deployment should take place. Once the deployment completes, the target
resource group will contain the resources created and configured according to the template’s content.

Templates constitute an example of the declarative deployment model, which defines the desired end
state, rather than describing the deployment process. This is different from a script-based deployment,
which implements the imperative approach, explicitly dictating the sequence in which to provision
different resources. Templates rely on intelligence that is built into the Azure platform to carry out
deployment in the most optimal way, typically resulting in minimized deployment time.

Azure Resource Manager also includes support for policies and locks that enhance resource deployment
and management capabilities. Policies allow you to define conditions that, once evaluated as either true
or false, affect the outcome of a deployment. For example, you can prevent users from creating resources
without assigning a specific tag to them. You can also restrict the sizes of VMs that users can provision or
restrict locations to which users can deploy such VMs. The primary purpose of locks is to prevent
accidental modification or deletion of resources.
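For instance, a delete lock on a resource group could be created with Azure CLI 2.0 along the following lines; the lock and group names are placeholders:

```shell
# Prevent accidental deletion of the resource group and its resources.
# Supported lock levels are CanNotDelete and ReadOnly.
az lock create --name DoNotDelete \
  --lock-type CanNotDelete \
  --resource-group 20533E-LabRG
```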

Note: You will learn more about policies and locks in Module 11, “Implementing Azure-
based management, monitoring, and automation.”

Managing resources and resource groups


Every Azure resource belongs to a resource group.
When provisioning a resource via the Azure portal,
you can create a new resource group or use an
existing resource group.

When you deploy a solution that consists of a few resources working together, you should create a
dedicated resource group to help manage the
lifecycle of all the related assets. You can add or
remove additional resources from the resource
group as your solution evolves.

Creating resource groups and adding resources to resource groups


The Azure Resource Manager Azure PowerShell module allows you to manage resource groups in your
Azure subscription. To create resource groups, use the New-AzureRmResourceGroup cmdlet. At that
point, you will be able to use resource type–specific cmdlets to create resources in that resource group.
You can also use a deployment template to add resources to a resource group. Azure CLI offers equivalent
capabilities with the az group create command and resource-specific commands.
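A minimal Azure CLI 2.0 sketch of this workflow, with placeholder names, might look as follows (storage account names must be globally unique):

```shell
# Create a resource group in the West Europe region.
az group create --name 20533E-LabRG --location westeurope

# Create a locally redundant storage account in that group.
az storage account create --name mystorage20533e \
  --resource-group 20533E-LabRG \
  --location westeurope \
  --sku Standard_LRS

# When the resources are no longer needed, delete the group and all its resources.
az group delete --name 20533E-LabRG --yes
```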

Moving resources and resource groups


You can move resources between resource groups in the same or different Azure subscriptions. This might
be necessary for several reasons:

• A resource needs to be in a different logical grouping or Azure subscription.

• A resource does not share the same lifecycle with other resources that were in its group.

You should consider the following factors when moving a resource:

• You cannot change a resource’s location. After you create a resource, it must remain in the same
Azure region.
• You should use the latest version of the Azure PowerShell module if you are using it to move
resources.

• Both the source and destination resource groups are blocked for deletion while the move operation
takes place.

There are additional considerations when moving resources between subscriptions, including the
following:
• Both subscriptions must be associated with the same Azure AD tenant.

• You must move all dependent resources at the same time. For example, when moving a virtual
machine, you must include in the scope of the move the storage account hosting its virtual disk files
and the virtual network to which its network interface cards are attached.

• You must ensure that the target subscription is registered for the provider of each resource type that
you intend to move.
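To illustrate, a move with Azure CLI 2.0 might be sketched as follows; the resource ID, group names, and subscription ID are placeholders, and you should first confirm that the resource type supports moving:

```shell
# Identify the full ID of the resource that you want to move.
az resource show --resource-group source-rg \
  --name mystorage20533e \
  --resource-type Microsoft.Storage/storageAccounts \
  --query id

# Move the resource to another resource group; add --destination-subscription-id
# to move it to a resource group in a different subscription.
az resource move --destination-group target-rg \
  --ids "/subscriptions/<subscription-id>/resourceGroups/source-rg/providers/Microsoft.Storage/storageAccounts/mystorage20533e"
```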

Additional Reading: For more information regarding registering resource providers, refer
to: “Resource providers and types” at https://aka.ms/mc7vuj

Additional Reading: Not every resource type supports the move operation. For more
information, refer to: “Move resources to new resource group or subscription” at:
http://aka.ms/Ry0sqz

Azure Resource Manager deployment methodologies


In the past, deploying Azure-based solutions relied
on the imperative approach. The provisioning
process consisted of a sequence of steps, each
creating individual components of the solution.

Azure Resource Manager supports a new, declarative deployment methodology, based on
Azure Resource Manager deployment templates. A
template is a JSON-formatted file that defines a
collection of resources that you intend to
provision together in the same resource group.
The resulting deployment populates the target
resource group according to the template’s
content.

While traditional deployment methods that rely on the GUI or scripting and programming languages are
still available, templates offer additional benefits. Like scripts, they facilitate deployment of
multicomponent solutions in an automated manner. However, unlike scripts, they do not explicitly specify
individual steps required to provision these solutions. Instead, they simply define their intended end state.
This way, they rely on the intelligence built into the Azure platform to deploy all necessary resources in
the most optimal way. This results in minimized deployment time and reduces the potential for errors. If
needed, you have the option to define dependencies between resources in order to control the resource-
provisioning sequence.
Deployment templates are ideal if you need to provision multiple solutions with the same general design.
For example, you can deploy the same template to separate resource groups, representing development,
test, quality assurance, and production environments. To account for any potential differences between
them, you can replace specific values in the template with parameters, and then assign values to these
parameters at the deployment time.

Templates are idempotent, which means that you can deploy them multiple times to the same resource
group with the same outcome. This is useful when you want to recreate an original deployment or
remediate any issues resulting from post-deployment changes.
Templates support VM extensions, which allow you to configure operating systems within Azure VMs as
part of their deployment. These extensions include configuration management services, such as
PowerShell Desired State Configuration, Chef, or Puppet.

Note: Visual Studio with Azure SDK and Visual Studio Code simplify authoring and editing
deployment templates.

Introduction to Azure Resource Manager templates


An Azure Resource Manager template contains a
JSON-formatted definition of one or more Azure
resources, along with parameters and variables
that facilitate customizing their configuration.
When creating and working with resource
templates, you should consider:

• Which resources you are going to deploy.

• Where your resources will be located.


• Which version of the resource provider API you will use.

• Whether there are dependencies between resources.

• When you will specify values of resource properties. While you can include these values in the
template, it is generally preferable to specify them during deployment via corresponding parameters.

Understanding the structure of a resource template


A resource template consists of the following sections:

{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "",
  "parameters": { },
  "variables": { },
  "resources": [ ],
  "outputs": { }
}

The following table describes the sections in the code sample above.

Element name Required Description

$schema Yes This is a URL identifying the location of the JSON schema file, which describes the template syntax.

contentVersion Yes A custom value that you define to keep track of changes to template content.

parameters No You can provide parameters during deployment, either interactively or via a parameter file, to customize properties of deployed resources.

variables No Variables are typically used to convert values of parameters to the format that is required to set resource property values.

resources Yes These are resources that you want to create or modify as the result of the deployment.

outputs No These values are returned by the deployment.



The next topic discusses these sections in more detail.

The following code is an example of a complete template that deploys a web app and uses code in a .zip
file to provision the app:

{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "siteName": {
      "type": "string"
    },
    "hostingPlanName": {
      "type": "string"
    },
    "hostingPlanSku": {
      "type": "string",
      "allowedValues": [
        "Free",
        "Shared",
        "Basic",
        "Standard",
        "Premium"
      ],
      "defaultValue": "Free"
    }
  },
  "resources": [
    {
      "apiVersion": "2016-09-01",
      "type": "Microsoft.Web/serverfarms",
      "name": "[parameters('hostingPlanName')]",
      "location": "[resourceGroup().location]",
      "properties": {
        "name": "[parameters('hostingPlanName')]",
        "sku": "[parameters('hostingPlanSku')]",
        "workerSize": "0",
        "numberOfWorkers": 1
      }
    },
    {
      "apiVersion": "2016-08-01",
      "type": "Microsoft.Web/sites",
      "name": "[parameters('siteName')]",
      "location": "[resourceGroup().location]",
      "tags": {
        "environment": "test",
        "team": "ARM"
      },
      "dependsOn": [
        "[resourceId('Microsoft.Web/serverfarms', parameters('hostingPlanName'))]"
      ],
      "properties": {
        "name": "[parameters('siteName')]",
        "serverFarm": "[parameters('hostingPlanName')]"
      },
      "resources": [
        {
          "apiVersion": "2016-08-01",
          "type": "Extensions",
          "name": "MSDeploy",
          "dependsOn": [
            "[resourceId('Microsoft.Web/sites', parameters('siteName'))]"
          ],
          "properties": {
            "packageUri": "https://auxmktplceprod.blob.core.windows.net/packages/StarterSite-modified.zip",
            "dbType": "None",
            "connectionString": "",
            "setParameters": {
              "Application Path": "[parameters('siteName')]"
            }
          }
        }
      ]
    }
  ],
  "outputs": {
    "siteUri": {
      "type": "string",
      "value": "[concat('http://',reference(resourceId('Microsoft.Web/sites', parameters('siteName'))).hostNames[0])]"
    }
  }
}

Additional Reading: For more information on Azure Resource Manager template structure, refer to: “Understand the structure and syntax of Azure Resource Manager templates” at: http://aka.ms/Yxslmx

Exploring the syntax of Azure Resource Manager templates


Azure Resource Manager template syntax can be
complex, especially in a large-scale deployment.
Each section of the template has its own structure
and syntax, and you can use numerous functions
and operators to define your deployment
configuration.

Understanding template sections


Template sections include parameters, variables,
resources, and outputs. This topic provides specific
information about each template section and the
type of code it contains.

Parameters
Parameters represent the values that you can specify when performing a deployment. With parameters,
you can customize the deployment process, which makes a template more flexible, accounting for
potential differences between target environments. For example, you might declare a parameter that
allows you to specify an Azure region. Without this parameter, you must specify the region directly in the
template, limiting its flexibility.

You can define each parameter by using any of the following elements.

Element name Required Description

parameterName Yes This is the parameter's name.

type Yes This is the parameter type, such as a string, integer, Boolean, object, or array.

defaultValue No This value is assigned automatically to the parameter if you do not assign one explicitly during deployment.

allowedValues No This is an array or list of values that the parameter can contain.

minValue No This is the minimum value for integer parameters.

maxValue No This is the maximum value for integer parameters.

minLength No This is the minimum length for string and array parameters.

maxLength No This is the maximum length for string and array parameters.

description No This is the description of the parameter that appears when deploying the template from the Azure portal.

The following code provides examples of parameters:

"parameters": {
  "siteName": {
    "type": "string",
    "minLength": 2,
    "maxLength": 60
  },
  "siteLocation": {
    "type": "string",
    "minLength": 2
  },
  "hostingPlanName": {
    "type": "string"
  },
  "hostingPlanSku": {
    "type": "string",
    "allowedValues": [
      "Free",
      "Shared",
      "Basic",
      "Standard",
      "Premium"
    ],
    "defaultValue": "Free"
  }
}

Variables
Variables contain values which are typically calculated based on values that you provide via parameters.
The following code provides examples of variables:

"variables": {
  "environmentSettings": {
    "test": {
      "instancesSize": "Small",
      "instancesCount": 1
    },
    "prod": {
      "instancesSize": "Large",
      "instancesCount": 4
    }
  },
  "currentEnvironmentSettings": "[variables('environmentSettings')[parameters('environmentName')]]",
  "instancesSize": "[variables('currentEnvironmentSettings').instancesSize]",
  "instancesCount": "[variables('currentEnvironmentSettings').instancesCount]"
}

Resources
The resources section includes a list of resources that the template will deploy, including their properties,
which might reference parameters and variables to set property values. Describing each resource can be
complex. Authoring this section of your template requires knowledge of the resource types that you are
deploying.

You define resources by using the elements in the following table.

Element name Required Description

apiVersion Yes This is the version of the REST API that the resource provider will use to create the resource.

type Yes This is a string consisting of the resource provider name and the resource type.

name Yes This is the resource name.

location No This is the name of the Azure region where the resource will be deployed.

tags No These are tags that are associated with the resource.

dependsOn No This is a list of resources on which the current resource depends. The dependsOn element and the resources element determine the order in which resources are deployed. If you do not include these elements in the template, the resource providers determine the order of deployment.

properties No These are resource-specific settings.

resources No These are child resources that depend on the current resource for their functionality. You must include this element and the dependsOn element to express the parent-child relationship.

The following code contains an example of resources definition:

"resources": [
  {
    "apiVersion": "2016-09-01",
    "type": "Microsoft.Web/serverfarms",
    "name": "[parameters('hostingPlanName')]",
    "location": "[resourceGroup().location]",
    "properties": {
      "name": "[parameters('hostingPlanName')]",
      "sku": "[parameters('hostingPlanSku')]",
      "workerSize": "0",
      "numberOfWorkers": 1
    }
  },
  {
    "apiVersion": "2016-08-01",
    "type": "Microsoft.Web/sites",
    "name": "[parameters('siteName')]",
    "location": "[resourceGroup().location]",
    "tags": {
      "environment": "test",
      "team": "ARM"
    },
    "dependsOn": [
      "[resourceId('Microsoft.Web/serverfarms', parameters('hostingPlanName'))]"
    ],
    "properties": {
      "name": "[parameters('siteName')]",
      "serverFarm": "[parameters('hostingPlanName')]"
    },
    "resources": [
      {
        "apiVersion": "2016-08-01",
        "type": "Extensions",
        "name": "MSDeploy",
        "dependsOn": [
          "[resourceId('Microsoft.Web/sites', parameters('siteName'))]"
        ],
        "properties": {
          "packageUri": "https://auxmktplceprod.blob.core.windows.net/packages/StarterSite-modified.zip",
          "dbType": "None",
          "connectionString": "",
          "setParameters": {
            "Application Path": "[parameters('siteName')]"
          }
        }
      }
    ]
  }
]

Outputs
The outputs section allows you to specify values that the deployment returns. For example, the
deployment could return the uniform resource identifier (URI) value of a resource that was deployed in
the template. The following table describes the elements included in the outputs section of an Azure
Resource Manager template.

Element name Required Description

outputName Yes This is the name of the output value. It must be a valid JavaScript identifier.

type Yes This is the type of the output value. The supported types are the same as those supported by parameters.

value Yes This is an expression that provides the returned output value.

The following example shows a value that is returned in the Outputs section:

"outputs": {
  "siteUri": {
    "type": "string",
    "value": "[concat('http://',reference(resourceId('Microsoft.Web/sites', parameters('siteName'))).hostNames[0])]"
  }
}

Additional Reading: For more information about Azure Resource Manager template
sections, refer to: “Understand the structure and syntax of Azure Resource Manager templates” at:
http://aka.ms/Yxslmx

Understanding template functions


The following table details the function types that you can use in a template to customize its content.

Function group Description

Numeric These functions work with integer variable types.

String These functions work with string variable types.

Array These functions work with arrays and array values.

Deployment value These functions retrieve values from sections of the template or values related to deployment.

Resource These functions retrieve values related to resources.

The following list contains some examples of template functions:


• add(): This function returns the sum of two integers. For example, add(2,5) returns an integer value
of 7.
• concat(): This function combines two or more string or array values into a single string or array value. For example, concat('Hello','World') returns a string value of 'HelloWorld'.

• toLower(): This function converts all characters of a string to lowercase. For example, toLower('Adatum') returns a string value of 'adatum'.

• parameters(): This function returns the value of a parameter that has been defined in the template. For example, parameters('locName') returns the value that the template user specified for the locName parameter.
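To show how such functions compose, the following hypothetical fragment derives a storage account name from a parameter; the storagePrefix parameter name is an assumption made for this example:

```json
"variables": {
    "storageAccountName": "[toLower(concat(parameters('storagePrefix'), 'storage'))]"
}
```

With storagePrefix set to Adatum, the storageAccountName variable would evaluate to adatumstorage.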

Additional Reading: For more information about Azure Resource Manager template
functions, refer to: “Azure Resource Manager template functions” at: http://aka.ms/Jcr7f7

Additional Reading: You can find hundreds of sample Azure Resource Manager templates
in the Azure Quickstart Templates repository on GitHub at: https://aka.ms/vvn2op

Deploy Azure Resource Manager templates


Once you have an Azure Resource Manager template, you can deploy all its resources by running the New-AzureRmResourceGroupDeployment Azure PowerShell cmdlet. To reference the template file, you use the -TemplateFile parameter. This will deploy the resources defined in the template to the resource group that you specify as the value of the -ResourceGroupName parameter. You can accomplish the same outcome by running the az group deployment create Azure CLI command with the --template-file and --resource-group parameters. In either case, you should provide the values of the parameters specified in the template. Alternatively, you might assign default values to these parameters directly within the template or reference a parameter file that contains their values during deployment.
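Both approaches can be sketched as follows, assuming a template file named azuredeploy.json in the current directory, an existing resource group, and a template that declares a siteName parameter (all placeholder values):

```shell
# Azure PowerShell variant (shown as a comment because this is a bash sketch):
# New-AzureRmResourceGroupDeployment -ResourceGroupName 20533E-LabRG `
#     -TemplateFile .\azuredeploy.json -siteName mysite20533e

# Azure CLI 2.0 variant, passing a parameter value inline.
az group deployment create \
  --resource-group 20533E-LabRG \
  --template-file azuredeploy.json \
  --parameters siteName=mysite20533e
```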

Note: You can also reference a URL of an existing template in an internet location by using the -TemplateUri (Azure PowerShell) or --template-uri (Azure CLI) parameter.

To use Azure PowerShell and Azure CLI, you must be familiar with their syntax. In addition, you might
need to install the scripting engine, unless you use Azure Cloud Shell. The Custom deployment blade in
the Azure portal provides a convenient way of deploying Azure Resource Manager template–based
resources. To access it, select the Create a resource entry in the hub menu, and then select the Template
deployment option. From there, you can build your own template in the browser-based template editor,
choose one of the predefined templates, or load a GitHub QuickStart template. This last option uses the
GitHub repository, which hosts hundreds of ready-to-use templates.

Note: Every QuickStart template published on GitHub has a corresponding Deploy to Azure link. When you click the link, it automatically redirects your web browser session to the Azure portal and initiates deployment, prompting you only for the values of the required parameters. The same GitHub page also has the Visualize link. When you click this link, it opens a new browser window with the template content appearing in Azure Resource Manager Template Visualizer (at http://aka.ms/Fw4rij). The Visualizer displays a diagram showing the resources defined in the template, including relationships between them.

Additional Reading: For more information, refer to: “Azure Quickstart Templates” at:
http://aka.ms/Qgh9jn

Additional Reading: You can also author and deploy templates by using Visual Studio and
Visual Studio Code. For more information, refer to: “Creating and deploying Azure resource
groups through Visual Studio” at: https://aka.ms/nhzlop and to “Use Visual Studio Code
extension to create Azure Resource Manager template” at: https://aka.ms/W2105t

Demonstration: Viewing and deploying a GitHub Azure Quickstart template
In this demonstration, you will see how to:

• Visualize an Azure Resource Manager template.

• Deploy an Azure Resource Manager template from GitHub.

Core concepts of the Azure classic deployment model


The Azure classic deployment model was originally
referred to as the Service Management model.
Introduced in 2009, this model served as the
primary method of deploying and managing
Azure services until the introduction of Azure
Resource Manager.

While there are numerous differences between the two models, this topic focuses on IaaS-related
differences. In this context, the primary concept
that distinguishes Service Management from
Azure Resource Manager is cloud service. A cloud
service is a logical container hosting one or more
VMs. Besides enabling the VMs to communicate directly with each other, it also allows for access to them
over the internet. This is possible because each cloud service has a public IP address and a corresponding,
publicly resolvable DNS name.

Typically, to access a VM in the classic deployment model, you create an endpoint by modifying the cloud
service where this VM resides. Each VM exists within the boundaries of a cloud service. A cloud service also
allows for grouping VMs into availability sets, thereby providing resiliency against hardware failures and
continuity of service during occasional maintenance events in Azure datacenters. Hardware failure
resiliency relies on placing VMs into two distinct fault domains, with each fault domain representing a
separate server rack. Continuity of service involves placing VMs into five update domains. The platform
ensures that only one update domain is updated at a time.

A cloud service also provides load-balancing functionality. By using a load-balanced endpoint, a cloud
service distributes incoming traffic targeting a specific port across VMs within the same availability set.
In addition, a cloud service handles network address translation (NAT), allowing for connectivity to
individual VMs via its endpoints.
Optionally, you can also create a virtual network in Azure and deploy VMs in the classic deployment
model into the virtual network. This arrangement allows you to provide direct connectivity between VMs
that do not reside within the same cloud service. Creating a virtual network and deploying VMs into it is
also necessary if you want to establish direct connectivity between Azure VMs and on-premises networks.

Although all services that you provision by using the Azure classic deployment model belong to a
resource group, you might not be able to directly assign them to a specific group at the time of
provisioning. In some cases, it is possible to move classic resources across resource groups. For example,
you can move a classic VM to another resource group, if you move it along with its cloud service and all
other VMs that are part of the same cloud service. At the time of authoring this course, there is no
support for moving classic virtual networks between resource groups.

The Azure classic deployment model does not support several Azure Resource Manager–specific mechanisms,
such as tags, policies, or locks. However, it is possible to use role-based access control (RBAC) to control access to classic resources.

Additional Reading: You can use an automated Azure PowerShell–based process to
migrate Azure VMs, virtual networks, and storage accounts from the classic deployment model to
the Resource Manager deployment model. For more information, refer to: “Migrate IaaS
resources from classic to Azure Resource Manager by using Azure PowerShell” at:
https://aka.ms/ovhdg9

Question: What criteria would you use when defining resource groups in your environment?

Lab: Managing Microsoft Azure


Scenario
Adatum Corporation wants to expand their cloud presence by taking advantage of the benefits of Azure.
Your task is to explore and compare the available IaaS features by using the Azure portal, Windows
PowerShell, and Azure CLI.

Objectives
After completing this lab, you will be able to:

• Use the Azure portals.

• Use Azure Resource Manager features via the Azure portal.

• Use Azure PowerShell.


• Use Azure CLI.

Note: The lab steps for this course change frequently due to updates to Microsoft Azure.
Microsoft Learning updates the lab steps frequently, so they are not available in this manual. Your
instructor will provide you with the lab documentation.

Lab Setup
Estimated Time: 50 minutes

Virtual machine: 20533E-MIA-CL1

User name: Student

Password: Pa55w.rd

Before you start this lab, ensure that you have completed the tasks in the “Preparing the environment”
demonstration, which is in the first lesson of this module. Also, ensure that the setup script has completed.

Question: Why did you use Azure PowerShell cmdlets that contained Rm in the lab?

Module Review and Takeaways


Real-world Issues and Scenarios
• You can use the Azure module for Windows PowerShell and Azure CLI to write simple, easy-to-use
provisioning scripts that build complex, cloud-based solutions and infrastructure components on
demand.

Tools
The following table lists the tools and interfaces that this module references.

Tool Use to Where to find it

The Azure portal Manage Azure Resource Use a web browser to navigate to
Manager and classic resources https://portal.azure.com.

The Azure Enterprise Portal Manage multiple Azure Use a web browser to navigate to
subscriptions under an https://ea.azure.com.
Enterprise Agreement

Azure PowerShell modules Manage Azure from Windows Install by using either the
PowerShell Microsoft Web Platform Installer
(on Windows) or standalone
installers, or from the PowerShell
Gallery. Alternatively, use Cloud
Shell from within the Azure
portal.

Azure CLI Manage Azure resources from Install by using the Web Platform
a shell interface Installer (on Windows),
standalone installers, apt-get
(Bash shell on Windows, Ubuntu,
and Debian), or curl on Mac OS.
Alternatively, use Cloud Shell
from within the Azure portal.

Module 2
Implementing and managing Azure networking
Contents:
Module Overview 2-1
Lesson 1: Overview of Azure networking 2-2

Lesson 2: Implementing and managing virtual networks 2-24

Lab A: Using a deployment template and Azure PowerShell to implement Azure virtual networks 2-31

Lesson 3: Configuring an Azure virtual network 2-32

Lesson 4: Configuring virtual network connectivity 2-42

Lab B: Configuring VNet Peering 2-61

Module Review and Takeaways 2-62

Module Overview
Networking is one of the primary building blocks of infrastructure solutions in Microsoft Azure. Therefore,
having a clear understanding of how to configure Azure networking components is an essential part of
practically every Infrastructure as a Service (IaaS)–based deployment. In this module, you will learn how to
provision and manage Azure networking to facilitate connectivity between the compute resources
residing in Azure, and to enable you to connect to them from your on-premises environment.

Objectives
After completing this module, you will be able to:

• Plan virtual networks in Azure.

• Implement and manage virtual networks.

• Configure cross-premises connectivity and connectivity between virtual networks in Azure.

• Configure an Azure virtual network.



Lesson 1
Overview of Azure networking
Azure networking components allow customers to create and manage virtual private networks in Azure and
securely link them to other virtual networks or their own on-premises networking infrastructure.
Fundamental principles of Azure networking match those applicable to traditional, on-premises networks.
However, there are also several unique networking characteristics specific to Azure that you must take
into account when planning and deploying virtual networks in Azure. In this lesson, you will learn about
similarities and differences between on-premises networks and Azure virtual networks.

Lesson Objectives
After completing this lesson, you will be able to:

• Describe the functionality of Microsoft Azure networking components.

• List features of Azure virtual networks.


• Configure virtual network interfaces of Azure VMs.

• Design IP address space and subnet ranges in Azure virtual networks.

• Configure Azure Load Balancer.

• Plan for and implement name resolution in Azure virtual networks.

Demonstration: Preparing the lab environment


Perform the tasks in this demonstration to prepare the lab environment. The environment will be
configured as you progress through this module, learning about the Azure services that you will use in
the lab.

Important: The scripts used in this course might delete objects that you have in your
subscriptions. Therefore, you should complete this course by using a new Azure subscription. You
should also use a new Microsoft account that is not associated with any other Azure subscription.
This will eliminate the possibility of any potential confusion when running setup scripts.

This course relies on custom Azure PowerShell modules, including Add-20533EEnvironment to prepare
the lab environment for the labs and Remove-20533EEnvironment to perform clean-up tasks at the end
of the module.

Azure networking components


A major incentive for adopting cloud solutions
such as Azure is the ability to move on-premises
workloads to the cloud. This can save
organizations money and simplify operations by
removing the need to maintain their own server
and network infrastructure. The most
straightforward method of moving such workloads
involves deploying Azure virtual machines (VMs)
and configuring them in the same manner as their
on-premises counterparts. Because every Azure
VM must reside on an Azure virtual network, you
should first consider implementing all necessary
networking components.

Virtual networks
When you deploy computers in your on-premises environment, you typically connect them to a network
to allow them to communicate directly with each other. Azure virtual networks serve the same basic
purpose. By placing a virtual machine on the same virtual network as other virtual machines, you
effectively provide direct IP connectivity between them. You also have the option of connecting different
virtual networks together, if your intention is to provide direct IP connectivity across them. It is also
possible to connect virtual networks in Azure to your on-premises networks, effectively making Azure an
extension of your own datacenter.

Azure virtual networks support Transmission Control Protocol (TCP), User Datagram Protocol (UDP), and
Internet Control Message Protocol (ICMP). At the time of authoring this course, there is no support for
broadcasts, multicasts, IP-in-IP encapsulated packets, or Generic Routing Encapsulation (GRE) packets.

Subnets
Every virtual network in Azure consists of one or more subnets. Subnets facilitate segmentation of
networks, providing a means of controlling communication between network resources. Each subnet
contains a range of IP addresses that constitute a subset of the virtual network address space.

Network interface card


VMs use a virtual network adapter to attach to a subnet to communicate with other VMs and other
networked resources, such as load balancers or gateways. VMs can have more than one network adapter,
typically to facilitate network isolation scenarios. The maximum number of network adapters that you can
attach to a VM depends on its size.

IP addresses
When you connect Azure VMs, Azure load balancers, and application gateways to a virtual network, the
Azure platform will ensure that each of them has a unique IP address, just as servers connected to on-
premises networks do. The Azure platform allocates two types of IP addresses to Azure resources
connected to a virtual network:

• Private IP addresses. A private IP is allocated to a network adapter of a VM, an internal Azure load
balancer, or an application gateway from the IP address range of the subnet to which they are
connected. This address is used for communication within the same virtual network, across multiple,
connected virtual networks, or with on-premises networks via a virtual private network (VPN) tunnel
or a private connection known as ExpressRoute.

• Public IP addresses. Public IP addresses allow Azure resources to become accessible directly from the
internet. For example, to provide inbound connectivity from the internet to an Azure VM, you can
assign a public IP address to the network adapter of that Azure VM. Alternatively, you can assign a
public IP address to a load balancer, such as an external Azure load balancer or an application
gateway, in front of that VM. Public IP addresses are available in two stock keeping units (SKUs), Basic
and Standard:

o Basic SKU public IP addresses have the following characteristics:

▪ They support both dynamic and static allocation methods.
▪ You can assign them to network interfaces of Azure virtual machines, internet-facing Basic
SKU Azure load balancers, application gateways, and VPN gateways.
▪ You can assign them to a specific zone within an Azure region, but they do not support zone-
level redundancy.
▪ They facilitate assignment of IPv6 public IP addresses to internet-facing Basic SKU Azure load
balancers.
o Standard SKU public IP addresses have the following characteristics:
▪ They support only the static allocation method.
▪ You can assign them to network interfaces of Azure virtual machines or internet-facing
Standard SKU Azure load balancers.
▪ You can assign them to a specific zone or configure them as zone redundant. You can assign
both types of IP addresses to the same Standard SKU Azure load balancer.
▪ They support only IPv4 addresses.

Note: At the time of authoring this course, Azure virtual networks do not support IPv6-
based connectivity. However, you can provide inbound access to virtual machines on a virtual
network via a dynamically allocated, public IPv6 address assigned to an internet-facing Basic SKU
Azure load balancer.

Virtual network-based DNS


The Domain Name System (DNS) provides resolution of user-friendly fully qualified domain names
(FQDNs), such as www.adatum.com, to the corresponding IP addresses. Azure provides built-in DNS
support within each virtual network to facilitate multiple name resolution scenarios. However, in some
cases, such as cross-premises connectivity or implementation of an Active Directory Doman Services (AD
DS) domain environment, you will need to configure your own custom DNS system.

Azure DNS
Azure DNS provides hosting of DNS zones, allowing for resolution of names within DNS namespaces that
you own, by relying on Microsoft-owned global infrastructure. Azure DNS uses anycast networking, which
delivers the quickest response to name queries by identifying the closest DNS server that is authoritative
for the zone containing that name.

Azure Load Balancer


You can use the built-in, free-of-charge Azure Load Balancer to enhance availability and scalability of
virtual machines by configuring them as a load-balanced set. Azure Load Balancer provides functionality
similar to hardware load balancers by eliminating single points of failure (application or hardware),
increasing uptime during planned maintenance or upgrades, and distributing workloads across multiple,
identically configured compute nodes. Azure Load Balancer can handle traffic originating from within the
same Azure virtual network, from any directly connected network, or from the internet. In addition, you
can configure it to implement the network address translation (NAT) capability, providing connections to
individual virtual machines in the load-balanced set. Azure Load Balancer is available in two SKUs, Basic
and Standard.
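Azure Load Balancer decides which backend VM receives a flow by hashing fields of the packet header (by default a five-tuple of source IP, source port, destination IP, destination port, and protocol), so all packets of one flow reach the same node. The following Python sketch models only the idea of hash-based distribution; the hash function, backend addresses, and flow values are illustrative, not Azure's actual implementation:

```python
# Simplified model of five-tuple hash-based load distribution.
# Azure's real algorithm differs; this only illustrates that a given
# flow consistently maps to the same backend instance.
def pick_backend(backends, src_ip, src_port, dst_ip, dst_port, protocol):
    flow = (src_ip, src_port, dst_ip, dst_port, protocol)
    return backends[hash(flow) % len(backends)]

# Hypothetical backend pool of three identically configured VMs
backends = ["10.0.0.4", "10.0.0.5", "10.0.0.6"]

flow_args = ("203.0.113.10", 50123, "40.112.0.1", 443, "Tcp")
first = pick_backend(backends, *flow_args)

# Within one run, the same flow always lands on the same backend
assert all(pick_backend(backends, *flow_args) == first for _ in range(100))
print(first in backends)
```

A flow with a different source port would hash independently and may land on another backend, which is how new connections spread across the pool.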

Azure Application Gateway


Application Gateway provides load-balancing services at the application layer. When compared with
Azure Load Balancer, Application Gateway supports several more advanced features, including Secure
Sockets Layer (SSL) offload, cookie-based affinity, URL path–based routing, and web application firewall
(WAF).

Azure Traffic Manager


Traffic Manager is a DNS-based load-balancing solution available in Azure. Its primary strength is the
ability to load balance between endpoints in different Azure regions, non-Microsoft clouds, or your on-
premises datacenters. You can configure load balancing to support failover or to ensure that users
connect to an endpoint that is closest to their physical location.

Note: Traffic Manager is described in more detail in Module 5, “Implementing Azure App
Service.”

Network Security Groups


Network Security Groups (NSGs) are collections of network-level firewall rules that you can associate with
virtual network subnets. NSG rules allow or deny inbound and outbound traffic based on a combination
of the protocols, IP address prefixes, and ports that you specify. This allows you to configure subnet
isolation, similar to configuring on-premises perimeter networks. If you need more granular control, you
can assign NSGs to individual network adapters of Azure VMs.
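Conceptually, NSG rules are processed in priority order (lower numbers first), and the first rule that matches the traffic decides the outcome. The Python sketch below models that evaluation logic; the rule set and the simplified matching fields are hypothetical, not an actual Azure API:

```python
# Conceptual model of NSG rule evaluation: rules are sorted by priority
# (lower number = higher precedence) and the first match wins.
def evaluate(rules, protocol, port):
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if rule["protocol"] in (protocol, "*") and rule["port"] in (port, "*"):
            return rule["access"]
    return "Deny"  # unmatched inbound traffic is ultimately denied by default rules

# Hypothetical rule set: allow HTTPS, block RDP, deny everything else
rules = [
    {"priority": 100, "protocol": "Tcp", "port": 443, "access": "Allow"},
    {"priority": 200, "protocol": "Tcp", "port": 3389, "access": "Deny"},
    {"priority": 4096, "protocol": "*", "port": "*", "access": "Deny"},
]

print(evaluate(rules, "Tcp", 443))   # Allow
print(evaluate(rules, "Tcp", 3389))  # Deny
print(evaluate(rules, "Udp", 53))    # Deny - matched by the catch-all rule
```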

Note: If you assign a Standard SKU public IP address to a network interface of an Azure
virtual machine, to allow traffic via that IP address, you must associate an NSG with that network
interface and configure its rules according to the intended traffic flow.

Service endpoints
Traditionally, if you connected from an Azure VM on a virtual network subnet to Platform as a Service
(PaaS) services such as Azure Storage or Azure SQL Database, network traffic would flow to the respective
public endpoints of these services. With the introduction of service endpoints, you can establish direct
connectivity via the Azure backbone network from individual subnets of your virtual networks to Azure
Storage accounts and Azure SQL Database servers residing in the same Azure region. This allows you to
restrict traffic to these PaaS services to connections that originate from the designated Azure VMs.

Configuring service endpoints is a two-step procedure:

1. You must enable a service endpoint for a service type, such as Azure Storage or Azure SQL Database,
for each virtual network subnet that will host Azure VMs that need to communicate with these
services.

2. The team managing an Azure Storage account and the team managing a server hosting SQL
databases residing in the same region as the virtual network must explicitly specify which subnets will
be able to communicate with their respective services. This configuration applies at the Azure
Storage–account and Azure SQL Database–server levels.

Routing
Azure implements a default routing configuration that facilitates basic connectivity, including the ability
to reach the internet and to communicate with other resources on the same or directly connected virtual
networks. You can modify this default configuration in two ways:

• Creating user-defined routes, which are route tables with one or more rules altering the default
routing behavior, and associate them with virtual network subnets. These rules apply to any traffic
leaving these subnets and targeting IP address ranges that you referenced as prefixes in the route
table. This allows you to affect routing behavior between subnets in the same virtual network,
between connected virtual networks, between on-premises networks and Azure virtual networks in
hybrid scenarios, and on traffic from virtual network subnets to the internet.

• Configuring Border Gateway Protocol (BGP) routing, which facilitates dynamic route exchange
between on-premises networks and Azure virtual networks in hybrid scenarios. This allows you to
affect routing behavior between on-premises networks and Azure virtual networks in hybrid scenarios
and on traffic from virtual network subnets to the internet.
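In both cases, when several routes match a destination, the route with the longest (most specific) matching prefix wins, which is how a user-defined route overrides a broader system route. A minimal Python sketch of longest-prefix-match selection over a hypothetical route table:

```python
import ipaddress

# Hypothetical route table: prefix -> next-hop type, as in a user-defined route
routes = {
    "0.0.0.0/0": "Internet",                 # default system route
    "10.0.0.0/8": "VirtualNetworkGateway",   # send private traffic on-premises
    "10.1.1.0/24": "VirtualAppliance",       # override: inspect one subnet's traffic
}

def next_hop(destination: str) -> str:
    """Select the route with the longest matching prefix for a destination."""
    dest = ipaddress.ip_address(destination)
    matches = [
        ipaddress.ip_network(prefix)
        for prefix in routes
        if dest in ipaddress.ip_network(prefix)
    ]
    best = max(matches, key=lambda net: net.prefixlen)
    return routes[str(best)]

print(next_hop("10.1.1.5"))       # VirtualAppliance - most specific match
print(next_hop("10.2.0.1"))       # VirtualNetworkGateway
print(next_hop("93.184.216.34"))  # Internet
```

The next-hop names mirror the next-hop types used by Azure route tables, but the prefixes themselves are example values only.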

Forced tunneling
Forced tunneling is a special use case of a user-defined route. You define a default route, which directs all
internet-bound traffic originating from one or more subnets on an Azure virtual network via a connection
to your on-premises network. Forced tunneling is common in scenarios where organizations want to
perform packet inspection and auditing of internet-bound traffic by using their existing on-premises
infrastructure.

Note: Service endpoints also optimize routing in forced tunneling scenarios. Without them,
traffic from an Azure VM to an Azure SQL database in the same Azure region flows via on-
premises networks. By enabling service endpoints, the traffic stays within the Azure backbone
network.

Virtual network connectivity


It is possible to allow connectivity to Azure VMs hosted on an Azure virtual network via their private IP
addresses from computers that are not connected directly to the same virtual network. If these computers
reside outside Azure, you can use one of the following methods:

• A point-to-site (P2S) VPN
• A site-to-site VPN

• Azure ExpressRoute

If these computers reside on another Azure virtual network, you can use one of the following methods:
• VNet Peering

• VNet-to-VNet connection

You will learn more about these methods in Lesson 4 of this module, “Configuring virtual network
connectivity.”

Azure virtual network gateway


Scenarios that involve connectivity between virtual networks in different Azure regions or cross-premises
VPN connectivity require the use of a VPN gateway. Similarly, cross-premises connectivity via
ExpressRoute requires the use of an ExpressRoute gateway. Both types of gateways handle routing of
network traffic in and out of the virtual network.

Overview of Azure virtual networks


An Azure virtual network constitutes a logical
boundary defined by a private IP address space
that you designate. You divide this IP address
space into one or more subnets. The process
closely resembles the design process for on-premises networks. However, in this case, you do
not have to manage the underlying infrastructure.
Instead, the networking features, such as routing
between subnets on the same virtual network and
to the internet and DNS-based name resolution,
are automatically available. As a result, every
virtual machine can access the internet by default.

Note: You can alter the default routing and name resolution functionality within Azure
virtual networks. You can also control network connectivity by allowing or blocking
communication on the subnet or the VM network–interface level. You will learn more about
these capabilities later in this module.

Each virtual network resides in a specific Azure region (referred to as Location in the Azure portal). The
choice of region determines the location of the Azure VMs that you subsequently deploy into the virtual
network. After you create a virtual network, you cannot change its associated region.

Note: Azure virtual networks cannot span multiple Azure regions.

In addition to choosing the Azure region, you must also specify the scope of the IP addresses that will be
automatically assigned to virtual machines that you deploy into that virtual network. Although the scope
can include public IPv4 ranges, almost all Azure virtual networks use the same set of private IPv4 spaces as
most on-premises network implementations. These IP address spaces are defined by RFC 1918 and
include the following:

• 10.0.0.0/8 (10.x.x.x)

• 172.16.0.0/12 (172.16.x.x–172.31.x.x)

• 192.168.0.0/16 (192.168.x.x)

Note: You should avoid overlapping address spaces across your Azure virtual networks and
your on-premises networks. Overlapping address spaces will prevent you from connecting these
networks.
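You can check candidate address spaces before connecting networks. The following sketch uses Python's standard ipaddress module; the CIDR ranges in the example are hypothetical, not values from the lab:

```python
import ipaddress

# RFC 1918 private IPv4 address spaces
RFC1918 = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_private(space: str) -> bool:
    """Return True if the address space falls within an RFC 1918 range."""
    net = ipaddress.ip_network(space)
    return any(net.subnet_of(block) for block in RFC1918)

def overlaps(space_a: str, space_b: str) -> bool:
    """Return True if two address spaces overlap and so cannot be connected."""
    return ipaddress.ip_network(space_a).overlaps(ipaddress.ip_network(space_b))

# Example: a virtual network address space vs. other networks
print(is_private("10.1.0.0/16"))               # True
print(overlaps("10.1.0.0/16", "10.1.1.0/24"))  # True - connection would fail
print(overlaps("10.1.0.0/16", "10.2.0.0/16"))  # False - safe to connect
```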

The Azure platform uses the Dynamic Host Configuration Protocol (DHCP) service to allocate IP addresses
from the ranges you assign to virtual network subnets. Each IP address lease has an infinite duration, but
the lease is released if you deallocate (stop) the virtual machine to which the IP address is assigned. To
avoid IP address changes regardless of the state of the virtual machine, you can configure a static private
IP address from the range of IPv4 addresses associated with the virtual network.

Note: A static private IP address in Azure corresponds to a DHCP reservation in the on-
premises networking terms. In on-premises scenarios, assigning a static IP address involves
modifying the configuration of the network interface within the operating system. You must not
use this method with Azure virtual machines because it will result in connectivity failures. Instead,
you must modify the properties of the network interface attached to the virtual machine via the
Azure management interface. To do so, use the Azure portal, Azure command line interface
(CLI) command az network nic ip-config update, or the Windows PowerShell cmdlet
Set-AzureRmNetworkInterface.

For an Azure VM to be accessible from the internet, you must associate it with either a static or dynamic
public IP address. To accomplish this, you can assign an IP address directly to a network adapter of the
Azure VM or to an internet-facing load balancer in front of that Azure VM. You can configure a load
balancer to distribute inbound traffic from the internet across multiple virtual machines in a load-
balanced manner. In addition, you can configure NAT on the load balancer and direct incoming traffic on
a specific TCP port to an individual virtual machine behind it. This way you can share the same public IP
address across multiple virtual machines.
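The NAT behavior described above can be thought of as a lookup table that maps a frontend port on the shared public IP address to a specific VM's private IP address and port. The Python sketch below is purely conceptual; the addresses and port numbers are hypothetical:

```python
# Conceptual inbound NAT table on a load balancer: one public IP shared
# across VMs, with distinct frontend ports forwarding to individual VMs.
nat_rules = {
    50001: ("10.0.0.4", 3389),  # RDP to the first VM
    50002: ("10.0.0.5", 3389),  # RDP to the second VM
}

def forward(frontend_port):
    """Return the (private IP, port) target for an inbound connection."""
    target = nat_rules.get(frontend_port)
    if target is None:
        raise LookupError(f"No NAT rule for frontend port {frontend_port}")
    return target

print(forward(50001))  # ('10.0.0.4', 3389)
print(forward(50002))  # ('10.0.0.5', 3389)
```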

A public IP address constitutes a separate object in the Azure Resource Manager deployment model. This
means you can manage it independently of both the load balancer and an Azure VM’s network adapter.
For example, you can use the following Azure PowerShell command to create a public IP address by using
the static allocation method:

New-AzureRmPublicIpAddress -Name PublicIP -ResourceGroupName AdatumRG -Location centralus `
-AllocationMethod Static -DomainNameLabel loadbalancernrp

Alternatively, you can accomplish the same outcome by using the following Azure CLI command:

az network public-ip create --name PublicIP --resource-group AdatumRG --location centralus --allocation-method Static --dns-name loadbalancernrp

Subnets
To assign IP addresses from the IP address space of a virtual network to Azure VMs, you must first create
one or more virtual network subnets. Subnets divide your virtual network into smaller IP ranges so that
the resources organized within these subnets can be logically separated. Each subnet contains a range of
IP addresses that fall within the virtual network address space.

The use of multiple subnets is common when implementing multi-tier applications. Allocating one subnet
per tier makes it straightforward to use NSGs to prevent unauthorized communications between tiers. If
each tier resides on a separate subnet, you can assign a dedicated network security group to each subnet.
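As an illustration of this planning step, the following sketch carves a hypothetical virtual network address space into one subnet per tier by using Python's ipaddress module. Note that Azure reserves five addresses in every subnet (the network address, the default gateway, two addresses for Azure DNS, and the broadcast address), so the usable host count is five less than the raw subnet size:

```python
import ipaddress

# Hypothetical virtual network address space for a three-tier application
vnet = ipaddress.ip_network("10.1.0.0/16")

# Carve out one /24 subnet per tier (each /24 holds 256 addresses)
tiers = ["web", "app", "data"]
subnets = dict(zip(tiers, vnet.subnets(new_prefix=24)))

for tier, subnet in subnets.items():
    # Azure reserves 5 addresses per subnet: the network address, the
    # default gateway, two Azure DNS addresses, and the broadcast address.
    usable = subnet.num_addresses - 5
    print(f"{tier}: {subnet} ({usable} usable addresses)")
```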

Note: You will learn more about NSGs later in this module in Lesson 3, “Configuring an
Azure virtual network,” in the topic, “Configuring Network Security Groups.”

Virtual network–based DNS


Names of resources created in Azure can be resolved by using the Azure-provided DNS service or a
customer-provided DNS server. The Azure-provided DNS service is available by default and is sufficient in
some scenarios. For example, the client DNS resolver on an Azure virtual machine can use the Azure-
provided DNS service to resolve the internet-based names. The same DNS service allows for automatic
name resolution between virtual machines that reside on the same virtual network.
There are, however, situations in which you must implement a custom DNS server. This applies, for
example, when implementing hybrid connectivity between an Azure virtual network and an on-premises
network. Another common scenario involves deploying your own Active Directory domain environment in
Azure. In both cases, you must configure an operating system of each Azure virtual machine to use your
own DNS server. Typically, you accomplish this by modifying the properties of the Azure virtual network.
You can also override the virtual network setting by assigning a DNS server directly to a network adapter
of a VM. In either case, you must restart the operating system for the new assignment to take effect. This
is different from the way DHCP operates in on-premises networks, where such changes take place
dynamically following the renewal of a DHCP lease.

Virtual network connectivity


As mentioned in the previous topic, it is possible to allow connectivity to Azure VMs hosted on an Azure
virtual network via their private IP addresses from computers that are not connected directly to the same
virtual network. If these computers reside outside Azure, then you can use one of the following methods:

• A point-to-site VPN that connects individual computers to an Azure virtual network via a Secure
Socket Tunneling Protocol (SSTP) tunnel over the internet.

• A site-to-site VPN that connects an on-premises network to an Azure virtual network via an IPsec
tunnel over the internet.

• Azure ExpressRoute that connects an on-premises network via a private connection. ExpressRoute
provides more predictable performance, with higher bandwidth and lower latency than VPN
connections.
If these computers reside on another Azure virtual network, you can use one of the following methods:

• VNet Peering that connects Azure virtual networks within the same Azure region. The traffic between
virtual networks flows directly over the Azure backbone network.

Note: At the time of authoring this content, VNet Peering between Azure regions is in
public preview.

• VNet-to-VNet that connects Azure virtual networks in the same Azure region or in different Azure
regions via a pair of virtual gateways that encrypt network traffic. Using VNet-to-VNet is similar to a
site-to-site VPN. However, in this case, cross-region traffic doesn’t traverse the internet but is routed
over the Azure backbone network.

Cross-premises connectivity
Each of the cross-premises connectivity methods has unique benefits and was created for specific
scenarios. However, the methods share the same basic purpose. By creating a cross-premises connection
to an Azure virtual network, you allow users to connect to cloud-based resources the same way that they
connect to local resources.
Point-to-site

Use point-to-site connections when you have up to 128 client computers that you want to connect to an
Azure virtual network. Computers with a point-to-site VPN can use that connection from any location
with internet access. There is no need for dedicated hardware or software. You can leverage an Internet
Key Exchange v2 (IKEv2) VPN client built directly into Windows operating systems when using certificate
authentication.
authentication from Windows, Mac OS X, and Linux. This method requires a RADIUS server, which can
reside either on-premises or in the Azure virtual network. RADIUS supports Active Directory
authentication and third-party identity providers. It also allows you to implement multi-factor
authentication. With either method, you must deploy a VPN gateway in the Azure virtual network. In the
first scenario, the VPN gateway handles certificate-based authentication. In the second scenario, the VPN
gateway relays authentication requests to the RADIUS server.

The throughput of the VPN gateway determines the available bandwidth, which is shared by all incoming
VPN connections. The throughput varies depending on the VPN gateway SKU, with support for up to 1.25
gigabits per second (Gbps). This type of solution offers a convenient option for connecting individual
computers to one or more Azure virtual networks, without having to invest into an on-premises VPN
infrastructure or a private circuit. Point-to-site VPN is common in development, test, and lab
environments that rely on connectivity to on-premises infrastructure.

Site-to-site
A site-to-site VPN connects an on-premises network to an Azure virtual network via an IPsec VPN tunnel.
This requires an on-premises VPN infrastructure that routes traffic to and from the Azure virtual network.
For this purpose, you can use either a hardware VPN device or a software-based VPN service such as the
Routing and Remote Access Service (RRAS) running on a Windows server or the Linux-based Openswan.
In addition, you need to modify the on-premises routing configuration to ensure that the traffic targeting
Azure virtual networks reaches its destination.

Use site-to-site connections when you need to connect multiple on-premises network computers to one
or more Azure virtual networks. Note that computers must have a direct connection to the on-premises
network to use site-to-site VPN, which is not the case when you use point-to-site VPN. You can facilitate
both types of connectivity by configuring the same VPN gateway to support site-to-site VPN and point-
to-site VPN. However, if you do so, the VPN gateway throughput (up to 1.25 Gbps) is shared across all
connections. In addition, the performance depends on the bandwidth and latency of your on-premises
internet connection.
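As a sketch, the following Azure CLI commands illustrate the two resources that a site-to-site connection requires in addition to the Azure VPN gateway: a local network gateway representing the on-premises VPN device, and the connection itself. All names, addresses, and the shared key are hypothetical:

```shell
# Represent the on-premises VPN device and its address space:
az network local-gateway create \
  --resource-group AdatumRG \
  --name OnPremGW \
  --gateway-ip-address 203.0.113.10 \
  --local-address-prefixes 10.0.0.0/16

# Establish the IPsec tunnel between the Azure VPN gateway and the device:
az network vpn-connection create \
  --resource-group AdatumRG \
  --name AdatumS2S \
  --vnet-gateway1 AdatumGW \
  --local-gateway2 OnPremGW \
  --shared-key '<preshared-key>'
```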

Additional Reading: You can determine the latency of your internet connection to the
nearest Azure datacenters by using: “Azure Speed Test” at: https://aka.ms/ywzt2s

It is possible to connect multiple Azure virtual networks and multiple on-premises networks via a
combination of site-to-site VPN, VNet-to-VNet, and VNet Peering connections. This effectively allows for
sharing resources residing on the same Azure virtual network across multiple on-premises locations.

ExpressRoute
The ExpressRoute service relies on a private connection between your datacenter and an Azure datacenter
via a non-Microsoft connectivity provider. The connection supports links to multiple Azure virtual
networks (potentially in different Azure regions) via their respective ExpressRoute gateways. By
eliminating the dependency on internet connectivity, you can ensure consistent, reliable performance
levels. The expected latency is a few milliseconds and you can increase the maximum bandwidth beyond
what the VPN-based methods offer. As with site-to-site VPN, connectivity via ExpressRoute requires that
clients reside on-premises.

ExpressRoute provides other unique benefits compared to the two VPN-based solutions. With
ExpressRoute, you can directly connect—without crossing the public internet—to most public Azure
services that do not reside on Azure virtual networks. Such services include Azure Storage, Azure SQL
Database, and the Web Apps feature of Azure App Service. ExpressRoute also supports direct connectivity
to Microsoft Office 365 services.

Note: ExpressRoute-based Azure virtual network connectivity relies on the routing
functionality referred to as private peering. Similarly, connectivity to Azure public services relies
on public peering. Microsoft peering supports connectivity to Office 365, Microsoft Dynamics 365,
and Azure public services.

ExpressRoute offers per-circuit throughput of up to 10 gigabits per second (Gbps), with the per-gateway
throughput of up to 9000 megabits per second (Mbps). These capabilities make ExpressRoute the
preferred choice for enterprise and mission-critical workloads. ExpressRoute also might be worth
considering when implementing an Azure region as a disaster recovery site or as the backup destination
for on-premises systems. Other common scenarios that involve ExpressRoute include hybrid big data and
big compute solutions.

Connectivity between virtual networks


There are two methods to connect two Azure virtual networks directly: VNet-to-VNet and VNet Peering.

VNet-to-VNet

To connect virtual networks residing in the same Azure region or different Azure regions, you can
establish a VPN tunnel in a manner equivalent to setting up a Site-to-Site VPN between an Azure virtual
network and an on-premises location. This requires provisioning a VPN gateway in each of the virtual
networks. The choice of the VPN gateway SKU directly affects the bandwidth and latency of the
VNet-to-VNet connection. Each VNet can be in a different Azure subscription.

VNet Peering

Alternatively, you can connect two Azure virtual networks in the same Azure region or different Azure
regions by leveraging the VNet Peering capability. This allows for direct connectivity between the virtual
networks without deploying VPN gateways, which eliminates the associated performance overhead. At least one of the
virtual networks in a peering arrangement must be an Azure Resource Manager resource; it is not possible
to use VNet Peering to connect two classic virtual networks. Each VNet can be in a different Azure
subscription, but both subscriptions must be associated with the same Azure AD tenant. This restriction
does not apply to VNet-to-VNet connections.
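The following Azure CLI sketch shows one direction of a peering between two hypothetical virtual networks in the same resource group; remember that you must create a matching peering in the opposite direction for traffic to flow:

```shell
# Hypothetical names; older Azure CLI versions use --remote-vnet-id
# with the full resource ID instead of --remote-vnet.
az network vnet peering create \
  --resource-group AdatumRG \
  --name AdatumVNet1-to-AdatumVNet2 \
  --vnet-name AdatumVNet1 \
  --remote-vnet AdatumVNet2 \
  --allow-vnet-access
```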

Note: At the time of authoring this content, VNet Peering between Azure regions is in
public preview.

Note: Although you can use either VNet Peering or VNet-to-VNet connection when
connecting two Azure virtual networks, we recommend using VNet Peering. This method delivers
better performance and does not require provisioning VPN gateways. In addition, in scenarios
where both virtual networks must be accessible from your on-premises locations, this method
supports routing cross-premises traffic via a VPN gateway on a peered virtual network. This
allows you to use a single VPN gateway on one of the virtual networks instead of both.
On the other hand, it is important to note that VNet Peering does not apply encryption to traffic
flowing between virtual networks. If customers need to ensure encryption of communication
across virtual networks, they should consider using VNet-to-VNet or applying encryption at the
application level.
In addition, the pricing models for the two connectivity methods differ. Cost of VNet Peering is
directly proportional to the amount of inbound and outbound data transfer. The cost of VNet-to-
VNet connections consists of per hour charges for VPN gateways. In case of cross-region
connectivity, VNet-to-VNet connection pricing also includes outbound data charges.

Virtual gateways
For connectivity between Azure virtual networks in different regions or between an Azure virtual network
and an on-premises location, you have to provision a virtual gateway on each Azure virtual network.
Characteristics of the gateway depend on a few factors:

• Gateway type determines supported connectivity type:

o VPN. This type indicates that the gateway supports point-to-site, site-to-site, and VNet-to-VNet.

o ExpressRoute. This type indicates that the gateway supports linking a virtual network to an
ExpressRoute circuit.

A virtual network can contain one virtual gateway of each type, which facilitates using Site-to-Site VPN
as a failover path for ExpressRoute.
• VPN device type determines routing capabilities of the VPN gateway along with a number of
additional functional implications (covered later in this topic). There are two VPN device types
available:

o Policy-based (formerly known as static). Policy-based VPN devices operate according to local
IPSec policies that you define. The policies determine whether to encrypt and direct traffic that
reaches an IPSec tunnel interface based on the source and target IP address prefixes.
o Route-based (formerly known as dynamic). Route-based VPN devices rely on routes in the local
route table that you define to deliver traffic to a specific IPSec tunnel interface, which, at that
point, performs encryption and forwards the encrypted network packets. In this case, any traffic
reaching the interface is encrypted automatically and forwarded to the VPN gateway on the
other end of the tunnel.

• The Azure resource SKU determines the capacity and performance characteristics of the gateway. In
addition, its choice might affect the routing capabilities of the VPN gateway. For example, policy-
based gateways are supported only with the Basic SKU.

Azure VPN gateway is available in the following four SKUs:


o Basic SKU offers up to 100 Mbps of throughput and supports both policy-based and route-based
VPN gateways. However, it does not support BGP routing. As a result, it does not allow you to set
up active-active Site-to-Site VPN gateway connections to the same on-premises site. With route-
based VPN gateways, you can establish up to 10 VPN tunnels per VPN gateway to different on-
premises sites and Azure virtual networks.

o VpnGw1 SKU offers up to 500 Mbps of throughput and supports route-based VPN gateways. It
allows you to configure BGP routing with route-based gateway and active-active Site-to-Site VPN
gateway connections to the same on-premises site. It allows you to establish up to 30 VPN
tunnels per VPN gateway to on-premises sites and Azure virtual networks.

o VpnGw2 SKU offers up to 1 Gbps of throughput and supports route-based VPN gateways. It
allows you to configure BGP routing with route-based gateway and active-active Site-to-Site VPN
gateway connections to the same on-premises site. You can establish up to 30 VPN tunnels per
VPN gateway to on-premises sites and Azure virtual networks.

o VpnGw3 SKU offers up to 1.25 Gbps of throughput and supports route-based VPN gateways. It
allows you to configure BGP routing with route-based gateway and active-active Site-to-Site VPN
gateway connections to the same on-premises site. You can establish up to 30 VPN tunnels per
VPN gateway to on-premises sites and Azure virtual networks.
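To illustrate how the SKU is selected at deployment time, the following hypothetical Azure CLI sketch provisions a route-based VPN gateway with the VpnGw1 SKU (gateway provisioning can take 30 minutes or longer, so the command runs with --no-wait):

```shell
# Hypothetical names; assumes AdatumVNet contains a GatewaySubnet and
# that the public IP address resource AdatumGWIP already exists.
az network vnet-gateway create \
  --resource-group AdatumRG \
  --name AdatumGW \
  --vnet AdatumVNet \
  --public-ip-address AdatumGWIP \
  --gateway-type Vpn \
  --vpn-type RouteBased \
  --sku VpnGw1 \
  --no-wait
```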

ExpressRoute gateway is available in the following four SKUs:

o Basic SKU offers up to 500 Mbps of throughput. It does not support coexistence of the VPN
gateway on the same virtual network.

o Standard SKU offers up to 1000 Mbps of throughput. It supports coexistence of the VPN gateway
on the same virtual network.

o HighPerformance SKU offers up to 2000 Mbps of throughput. It supports coexistence of the VPN
gateway on the same virtual network.
o UltraPerformance SKU offers up to 9000 Mbps of throughput. It supports coexistence of the VPN
gateway on the same virtual network.

Note: At the time of authoring this course, ExpressRoute Basic SKU is deprecated.

Note: You can increase or decrease the SKU of a VPN gateway between VpnGw1, VpnGw2,
and VpnGw3 as needed. You also can resize the ExpressRoute gateways between the Standard
and High Performance SKUs. To change the remaining types of SKUs, you must recreate the
gateway.
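For example, resizing an existing VPN gateway between the supported SKUs is a single update operation (names are hypothetical):

```shell
az network vnet-gateway update \
  --resource-group AdatumRG \
  --name AdatumGW \
  --sku VpnGw2
```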

The throughput of ExpressRoute virtual gateway SKUs listed above represents the bandwidth between
your on-premises locations and the virtual network hosting the gateway. An ExpressRoute circuit can
accommodate links to multiple virtual networks. By default, a single circuit supports up to 10 virtual
network links. You can increase the number of virtual network links per circuit by purchasing the
ExpressRoute Premium add-on. At that point, the maximum number of links per circuit depends on the
circuit size, with up to 20 links for the smallest circuit size of 50 Mbps and up to 100 links for the largest
circuit size of 10 Gbps.

Additional Reading: For details regarding the ExpressRoute capacity limits, refer to:
“ExpressRoute FAQ” at: https://aka.ms/fnysfp

The choice of the VPN device type has a number of additional implications:
• Policy-based VPN devices support only a single site-to-site connection. With route-based VPN
devices, that number depends on the Azure VPN gateway SKU, with up to 10 connections with the
Basic SKU and up to 30 connections with the VpnGw1, VpnGw2, and VpnGw3 SKUs.

Additional Reading: It is possible to connect multiple policy-based on-premises devices to
a single route-based Azure VPN gateway by leveraging custom IPSec/Internet Key Exchange (IKE)
policies. For more information, refer to: “Connect Azure VPN gateways to multiple on-premises
policy-based VPN devices using PowerShell” at: https://aka.ms/tg19xx

• Policy-based VPN devices do not support point-to-site VPNs. This becomes an important factor if you
want to provide shared access to an Azure virtual network to clients connecting via a site-to-site VPN
and a point-to-site VPN. Effectively, to implement this functionality, you would have to use a route-
based VPN gateway in Azure.

• From the encryption standpoint, policy-based VPN devices support the Internet Key Exchange
version 1 (IKEv1) protocol and the AES256 (Advanced Encryption Standard), AES128, and 3DES (Triple
Data Encryption Standard) encryption algorithms, in addition to the SHA1 (Secure Hash Algorithm)
hashing algorithm. Route-based VPN devices offer support for IKEv2 and the AES256 and 3DES
encryption algorithms (during IKE Phase 1 setup) as well as both the SHA1 and the SHA2 hashing
algorithms (again, during IKE Phase 1 setup). They also support perfect forward secrecy (DH Group 1,
2, 5, 14, and 24).

Overview of network interfaces


An Azure VM connects to a subnet of an Azure
virtual network via its network adapters, the
number of which can vary between one and eight. The
Azure platform requires that you attach at least
one network adapter to each VM. This becomes
the primary network adapter. Any additional
network adapters are referred to as secondary
network adapters. Each network adapter can
reside on a different subnet, but they must all be
connected to the same virtual network. The
maximum number of network adapters that you
can attach to an Azure VM depends on its size.

Note: The Azure platform assigns the default gateway to the primary network adapter of
an Azure VM. As a result, by default, that VM’s secondary network adapters can communicate
only with resources residing on the same subnet to which they are connected. If you want to
allow traffic to other subnets, you can define a custom route table within the operating system of
the Azure VM. In addition, you should create user-defined routes that direct the traffic from
those subnets back to the same network adapter.

By default, each network adapter receives a single private IP address from the subnet’s range of IP
addresses. That IP address becomes part of the primary IP configuration of the network adapter. You can
create multiple secondary configurations with their own IP addresses, up to the limit that the platform
imposes. You can provide direct inbound connectivity from the internet to the same VM by adding one or
more public IP addresses to IP configurations, within the platform-imposed limits. Another way to provide
inbound connectivity from the internet to an Azure VM is by deploying an internet-facing load balancer
and adding the VM to its back-end pool.

Note: At the time of authoring this course, a single network adapter has the default limit of
50 private IP addresses. An internet-facing load balancer has the default limit of 10 public IP
addresses. If you need to increase these limits, contact Azure support.

Azure Resource Manager defines the network adapter as a separate networking resource that is
associated with a VM. Each network adapter has several properties that allow for customizing its
configuration. Some of the most important properties are:

• virtualMachine. Specifies the current VM that is associated with that network adapter.

• macAddress. Presents the media access control (MAC) address for the network adapter.

• networkSecurityGroup. Provides reference to the network security group resource associated with
the network adapter.

• dnsSettings. Provides DNS settings for the network adapter.

• ipConfigurations. Contains IP address configurations of the network adapter.

IP address configuration is bound to a network adapter by using the ipConfigurations child object.
You can use the New-AzureRmNetworkInterface Azure PowerShell cmdlet with the -PublicIpAddress
parameter or az network nic create with the --public-ip-address parameter to create a network adapter
with a public IP address. To assign a public IP address to an existing network adapter, run the
Set-AzureRmNetworkInterface Windows PowerShell cmdlet or the az network nic ip-config update
Azure CLI command.
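For example, the following hypothetical Azure CLI sketch associates an existing public IP address resource with the primary IP configuration of an existing network adapter:

```shell
# Hypothetical names; ipconfig1 is the default name of the primary
# IP configuration created with the network adapter.
az network nic ip-config update \
  --resource-group AdatumRG \
  --nic-name AdatumNIC \
  --name ipconfig1 \
  --public-ip-address AdatumPIP
```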

Overview of private IP addresses


Private IP addresses are required for Azure
resources such as VMs, internal load balancers,
or web application gateways. This facilitates
communication within the virtual network, with
resources on any other virtual network connected
to it, and, potentially, with on-premises resources,
if you configured hybrid connectivity.

Azure uses DHCP to assign dynamic and static
private IP addresses. The addresses belong to the
IP address ranges that you allocated when
creating virtual network subnets. The DHCP lease
is infinite, which means that IP addresses remain
allocated as long as the VM is in use. If you place the VM in the stopped (deallocated) state, then the
platform will release its dynamic IP address, returning it to a pool maintained by DHCP. As a result,
DHCP might assign that IP address to another resource on the same subnet. To prevent this situation—for
example, if a VM hosts a DNS service—you can designate its IP address as static. Static private IP
addresses are also needed when you control network access by using a firewall with rules referencing a
source or target IP address.
You can assign a static private IP address either during VM creation or at any point afterwards. The
address assignment takes place on the Azure VM network adapter level, rather than within the operating
system. To create such assignments, you can use the Azure portal, Azure PowerShell, Azure CLI, or an
Azure Resource Manager template. Note that setting a static IP address triggers a reboot of the operating
system within the Azure VM.

Assigning an IP address to a network adapter of an Azure VM


The following command retrieves a reference to a virtual network and then saves this reference in the
$vnet variable:

$vnet = Get-AzureRmVirtualNetwork -ResourceGroupName AdatumRG -Name AdatumVNet



Once you have this reference, you can use the New-AzureRmNetworkInterface cmdlet with the
-PrivateIpAddress parameter to create a network adapter with a static private IP address. For example, the
following Azure PowerShell command creates a network adapter with the name AdatumNic and the
private IP address 192.168.0.10, from the first subnet of the virtual network named AdatumVnet:

$nic = New-AzureRmNetworkInterface -Name AdatumNIC -ResourceGroupName AdatumRG -Location `
centralus -SubnetId $vnet.Subnets[0].Id -PrivateIpAddress 192.168.0.10

To add this network adapter with a static private IP address during VM creation, you use the following
Azure PowerShell cmdlet:

Add-AzureRmVMNetworkInterface -VM $vm -Id $nic.Id

This cmdlet references configuration parameters for the VM stored in the $vm variable, and network-
related configuration parameters for the network adapter stored in the $nic variable.

To change the allocation of the private IP address of an existing network adapter (whose name is stored in
the $nicName variable) to static, you can use the following sequence of commands:

$nic = Get-AzureRmNetworkInterface -ResourceGroupName AdatumRG -Name $nicName
$nic.IpConfigurations[0].PrivateIpAllocationMethod = 'Static'
Set-AzureRmNetworkInterface -NetworkInterface $nic

To create a new network adapter with a static private IP address by using Azure CLI, run the following
command:

az network nic create \


--resource-group AdatumRG \
--name AdatumNIC \
--location centralus \
--subnet default \
--private-ip-address 192.168.0.10 \
--vnet-name AdatumVNet

Once you create a new network adapter, you can attach it to a new VM during its deployment by running
the az vm create command with the --nics parameter.
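For example, the following sketch attaches the previously created network adapter to a new VM (the image alias and VM name are assumptions for illustration):

```shell
az vm create \
  --resource-group AdatumRG \
  --name AdatumVM \
  --nics AdatumNIC \
  --image UbuntuLTS \
  --generate-ssh-keys
```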

Overview of load balancers


Azure offers several different types of load
balancers that integrate with its IaaS and PaaS
services, including Azure VMs and virtual machine
scale sets. You can also implement non-Microsoft
load balancers available in Azure Marketplace,
including virtual appliances from vendors such as
F5 or KEMP.

Azure Basic Load Balancer


You can use Azure Basic Load Balancer to facilitate
availability and scalability of up to 100 Azure VMs
in the same availability set or a virtual machine
scale set. You can also deploy Azure Basic Load
Balancer in front of an individual Azure VM that is not part of an availability set. For multi-VM or virtual
machine scale set deployments, you can load-balance traffic that targets specific IP addresses and specific
TCP or UDP ports across multiple VMs. For single-VM deployments, the load balancer provides the NAT
functionality.

There are two types of Azure Basic Load Balancer:

• Internal load balancer. The internal load balancer enables you to load-balance traffic from the same
virtual network, other directly connected virtual networks, or an on-premises location connected via
Site-to-Site VPN or ExpressRoute. You might use internal load balancers for the following types of
connections:

o Between on-premises computers and VMs in an Azure virtual network

o Between multi-tier applications with back-end tiers that are not on the internet, but require load-
balanced traffic from the internet-facing tier
• Internet-facing load balancer. The internet-facing load balancer enables you to load balance traffic
from the internet. It also supports NAT, allowing connectivity to a specific Azure VM in a load-
balanced set via a designated port that you configure.

Both load balancers control the flow of traffic targeting IP addresses and ports assigned to their front-end
configuration across a set of virtual machines residing within a subnet of a virtual network. Incoming
traffic is subject to load balancer rules and inbound NAT rules that you define. The outcome of rule
processing determines which virtual machine behind the load balancer becomes the recipient of that
traffic.

To configure Azure Basic Load Balancer, provide the following details:


• Front-end IP configuration. Identifies one or more IP addresses that are accepting incoming traffic
that needs to be load balanced.

• Back-end address pool. Designates the virtual machines that receive network traffic from the load
balancer.
• Load-balancing rules. Determine how to distribute incoming traffic across virtual machines in the
back-end address pool.

• Probes. Verify the health and availability of virtual machines in the back-end pool.
• Inbound NAT rules. Determine the types of traffic that should be redirected to individual VMs in the
back-end pool rather than being distributed across the VMs.

You can use the Azure portal, Azure PowerShell, Azure CLI, Azure Resource Manager templates, or REST
API to create a load balancer. For example, to create a virtual network, a virtual network subnet, and an
external load balancer that will balance incoming network traffic on port 443 and provide connectivity on
port 3389 to two back-end VMs, you could use the following Azure PowerShell–based procedure.

Create a resource group and virtual network by using Azure PowerShell


1. Create a new resource group:

New-AzureRmResourceGroup -Name AdatumRG -Location centralus

2. Create a new virtual network with the name AdatumVnet and an address space (in this example
192.168.0.0/16), and store a reference to the virtual network in the $vnet variable:

$vnet = New-AzureRmVirtualNetwork -ResourceGroupName AdatumRG -Name AdatumVnet `
-AddressPrefix 192.168.0.0/16 -Location centralus

3. Add a virtual network subnet:

$backendSubnet = Add-AzureRmVirtualNetworkSubnetConfig -Name AdatumSubnet `
-VirtualNetwork $vnet -AddressPrefix 192.168.0.0/24

4. Update the configuration in the virtual network:

Set-AzureRmVirtualNetwork -VirtualNetwork $vnet

Create a Public IP address:


• Create an Azure Public IP address resource named PublicIP, to be used by a front-end IP pool:

$publicIP = New-AzureRmPublicIpAddress -Name PublicIP -ResourceGroupName AdatumRG `
-Location centralus -AllocationMethod Static -DomainNameLabel adatumlb

Create a front-end IP configuration and a back-end address pool:


1. Create a front-end IP configuration named LB-Frontend, that uses the Public IP address, and then
store the value in the variable $frontendIP:

$frontendIP = New-AzureRmLoadBalancerFrontendIpConfig -Name LB-Frontend `
-PublicIpAddress $publicIP

2. Create a back-end address pool named LB-backend, and then store the value in the variable
$beIPPool:

$beIPPool = New-AzureRmLoadBalancerBackendAddressPoolConfig -Name LB-backend

Create a load-balancer rule, NAT rules, a probe, and a load balancer


1. Create the NAT rules that will redirect all incoming traffic on ports 3441 and 3442 to port 3389 on
back-end VMs:

$inboundNATRule1 = New-AzureRmLoadBalancerInboundNatRuleConfig -Name RDP1 `
-FrontendIpConfiguration $frontendIP -Protocol TCP -FrontendPort 3441 -BackendPort 3389
$inboundNATRule2 = New-AzureRmLoadBalancerInboundNatRuleConfig -Name RDP2 `
-FrontendIpConfiguration $frontendIP -Protocol TCP -FrontendPort 3442 -BackendPort 3389

2. Create a health probe that will check the health status on a page named HealthDemo.aspx:

$healthProbe = New-AzureRmLoadBalancerProbeConfig -Name HealthProbe -RequestPath `
'HealthDemo.aspx' -Protocol http -Port 80 -IntervalInSeconds 15 -ProbeCount 2

Note: There is no support for HTTPS-based custom probes.

3. Create the load-balancer rule to balance all incoming traffic on port 443 to the back-end port 443 on
the addresses in the back-end pool:

$lbrule = New-AzureRmLoadBalancerRuleConfig -Name HTTP -FrontendIpConfiguration `
$frontendIP -BackendAddressPool $beIPPool -Probe $healthProbe -Protocol Tcp `
-FrontendPort 443 -BackendPort 443

4. Create a load balancer named AdatumLB that will use the previously configured rules:

$lb = New-AzureRmLoadBalancer -ResourceGroupName AdatumRG -Name AdatumLB -Location `
centralus -FrontendIpConfiguration $frontendIP -InboundNatRule `
$inboundNATRule1,$inboundNATRule2 -LoadBalancingRule $lbrule -BackendAddressPool `
$beIPPool -Probe $healthProbe

Create network adapters and configure a back-end IP address pool


1. Create network adapters:

$backendnic1 = New-AzureRmNetworkInterface -ResourceGroupName AdatumRG -Name nic1 `
-Location centralus -PrivateIpAddress 192.168.0.11 -Subnet $backendSubnet `
-LoadBalancerBackendAddressPool $lb.BackendAddressPools[0] `
-LoadBalancerInboundNatRule $lb.InboundNatRules[0]
$backendnic2 = New-AzureRmNetworkInterface -ResourceGroupName AdatumRG -Name nic2 `
-Location centralus -PrivateIpAddress 192.168.0.12 -Subnet $backendSubnet `
-LoadBalancerBackendAddressPool $lb.BackendAddressPools[0] `
-LoadBalancerInboundNatRule $lb.InboundNatRules[1]

2. Update the existing network adapter configuration with a back-end IP address pool:

$backendnic1.IpConfigurations[0].LoadBalancerBackendAddressPool=$beIPPool
$backendnic2.IpConfigurations[0].LoadBalancerBackendAddressPool=$beIPPool
Set-AzureRmNetworkInterface -NetworkInterface $backendnic1
Set-AzureRmNetworkInterface -NetworkInterface $backendnic2

Additional Reading: To use Azure CLI to create an internet-facing balancer, refer to:
“Creating an internet load balancer using the Azure CLI” at: https://aka.ms/mh30m8

Additional Reading: To use the Azure portal to create a load balancer, refer to: “Creating
an Internet-facing load balancer using the Azure portal” at: https://aka.ms/pe9pw4

Azure Standard Load Balancer


You can use Azure Standard Load Balancer to facilitate availability and scalability of up to 1000 Azure VM
instances in the same or different availability zones. You can use a combination of Azure VM instances
that are standalone, belong to different availability sets, or are part of virtual machine scale sets.
Azure Standard Load Balancer supports both internal and internet-facing configuration; however, the
latter requires the use of a standard SKU public IP address. This SKU is zone aware and can have one of
the following two settings:

• Zone-redundant

• Zonal

When you assign a zone-redundant public IP address to the front end of an Azure Standard Load
Balancer, the load balancer will distribute traffic across all virtual machines in a load-balanced scenario
where back-end virtual machines reside in different availability zones. Failure of an individual zone will not
affect the availability of the workload that each of them hosts. On the other hand, when you use a zonal
public IP address, network traffic to that IP address will target back-end virtual machines in the same
availability zone, and failure of the zone will affect the availability of the workload.

You can use zonal configuration in combination with DNS load-balancing solutions, such as Traffic
Manager, to provide resiliency not only across availability zones but also across multiple regions. Zonal
configuration also allows you to configure custom monitoring of availability of your workload in
individual zones.

When you use internal Azure Standard Load Balancer, you can increase redundancy by implementing a
high availability ports load balancing rule. The rule automatically implements per flow load balancing on
ephemeral ports targeting the front-end IP of the load balancer. This capability facilitates a range of
scenarios where it is not feasible to explicitly configure load balancing on specific ports, such as
active/active load balancing of network virtual appliances.
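As a sketch, an HA ports rule is expressed by specifying protocol All with front-end and back-end port 0, which load-balances all flows on all ports (names are hypothetical and assume an existing internal Standard Load Balancer):

```shell
az network lb rule create \
  --resource-group AdatumRG \
  --lb-name AdatumILB \
  --name HAPortsRule \
  --protocol All \
  --frontend-port 0 \
  --backend-port 0 \
  --frontend-ip-name LB-Frontend \
  --backend-pool-name LB-backend
```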

Azure Standard Load Balancer significantly increases insights into its operational state by offering
enhanced diagnostics. It also enforces security in the external configuration because it requires
configuration of network security groups (NSGs) for the subnets or network adapters of all VM instances in the back-end pool.

Additional Reading: For details regarding functionality and implementation of Azure


Standard Load Balancer, refer to: “Azure Load Balancer Standard overview” at:
https://aka.ms/Yolstd

Application Gateway
Application Gateway provides routing and load-balancing services at the application layer. You can use
application gateways in scenarios that require the following features that Azure Basic or Standard Load
Balancer does not support:

• SSL offload. After uploading a server certificate and creating a listener on port 443, you can configure
an application gateway with routing rules that terminate an SSL session at the gateway. This
eliminates the need to decrypt incoming traffic on the load-balanced Azure VMs, thereby improving
their performance.
• Cookie-based affinity. An application gateway consistently redirects requests from a given client to
the same Azure VM in the load-balanced set.

• URL-based routing. An application gateway directs incoming requests to different pools of load-
balanced servers depending on the URL path within these requests.
• WAF. The firewall implements a set of customizable rules that protect a load-balanced set of VMs
from common web-based exploits, such as Structured Query Language (SQL) injection or cross-site
scripting.
You can configure an application gateway as internet-facing or run it entirely within an Azure virtual
network. Its endpoints can consist of public IP addresses or Azure internal IP addresses representing Azure
VMs, Azure Cloud Services instances, or Web Apps.
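The following hypothetical Azure CLI sketch deploys a small internet-facing application gateway with cookie-based affinity enabled; configuring SSL offload would additionally require uploading a server certificate (for example, with the --cert-file and --cert-password parameters):

```shell
# Hypothetical names; assumes AdatumVNet contains a dedicated
# AppGWSubnet and the public IP resource AdatumAppGWIP exists.
az network application-gateway create \
  --resource-group AdatumRG \
  --name AdatumAppGW \
  --vnet-name AdatumVNet \
  --subnet AppGWSubnet \
  --sku Standard_Small \
  --capacity 2 \
  --http-settings-cookie-based-affinity Enabled \
  --public-ip-address AdatumAppGWIP
```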

Note: At the time of authoring this course, there is no support for static assignment for an
internet-facing application gateway’s public IP address.

Traffic Manager
Traffic Manager is a DNS-based load-balancing solution available in Azure. Unlike Azure Load Balancer or
Application Gateway, which redirect incoming traffic, Traffic Manager relies on a customizable DNS name
resolution mechanism to ensure that the traffic reaches the most suitable load-balanced endpoint. That
endpoint represents a web application or service and can reside in any internet-accessible location.
Effectively, it is possible to use Traffic Manager to load-balance across the entire globe, targeting different
Azure regions, other cloud providers, or on-premises datacenters. You can use Traffic Manager’s load-
balancing algorithms to customize load-balancing behavior. For example, you can minimize the response
time by redirecting traffic to the endpoint closest to the request’s origin. Alternatively, you can distribute
incoming requests across multiple endpoints according to custom-defined weight values that you assign
to each endpoint.
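Traffic Manager's weighted routing method can be illustrated with a short simulation. The following Python sketch is purely conceptual; the endpoint names and weights are hypothetical and no Azure API is involved. Each DNS response returns one endpoint with probability proportional to its weight:

```python
import random

# Hypothetical endpoints with custom weights, as in Traffic Manager's
# weighted routing method: higher weight -> larger share of DNS answers.
endpoints = {
    "contoso-eastus.example.net": 3,
    "contoso-westeurope.example.net": 1,
}

def resolve(endpoints):
    """Return one endpoint, chosen with probability proportional to its weight."""
    names = list(endpoints)
    weights = [endpoints[n] for n in names]
    return random.choices(names, weights=weights, k=1)[0]

# Simulate many DNS queries; East US should receive about three quarters.
answers = [resolve(endpoints) for _ in range(10_000)]
share = answers.count("contoso-eastus.example.net") / len(answers)
print(share)   # roughly 0.75 with weights 3:1
```

Weighted distribution is useful, for example, when gradually shifting traffic from an existing deployment to a new one.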

Note: Module 5, “Implementing Azure web app services,” will describe Traffic Manager in
more detail.

You can combine load-balancing solutions to leverage their complementary strengths. An example of
such design relies on Traffic Manager to provide global load balancing, with its endpoints representing
internet-facing instances of Application Gateway. In turn, each Application Gateway instance points to
load-balanced sets of Azure VMs behind multiple Azure load balancers, with URL-based routing
distributing the traffic between them.

Overview of Azure DNS


Azure DNS is a managed service that hosts
internet-facing DNS zones to provide name
resolution by using the Microsoft global DNS
infrastructure. Azure DNS implements anycast
networking, which provides the fastest response
to name queries by directing them to the DNS
server closest to their origin. You can designate
Azure DNS servers as authoritative for a new DNS
domain name, or you can add them as
authoritative servers for your existing DNS zones.

Note: At the time of authoring this course, Azure DNS does not support purchasing and
registering new domains. After you register a new domain with a non-Microsoft registrar, you
can use Azure DNS to host it.

In either case, delegation for your zone at your registration authority should include the Azure DNS name
servers that host your DNS zone. Name servers in Azure DNS are allocated automatically from the pool
during zone creation. You can view currently allocated name servers for your zone by running the
following Azure PowerShell cmdlet:

Get-AzureRmDnsRecordSet -Name "@" -RecordType NS -Zone $zone

The $zone variable should reference your Azure DNS-hosted zone.

You can create and manage zones by using the Azure portal, Azure PowerShell, and Azure CLI.

To create an Azure DNS zone by using Azure PowerShell, perform the following steps:

1. After authenticating to your Azure subscription, create a new resource group:

New-AzureRMResourceGroup -Name AdatumRG -Location centralus

2. Create a DNS zone:

New-AzureRmDnsZone -Name adatum.com -ResourceGroupName AdatumRG

3. Retrieve the SOA and the NS record for the zone:

Get-AzureRmDnsRecordSet -ZoneName adatum.com -ResourceGroupName AdatumRG

Additional Reading: For information regarding using Azure CLI to manage DNS zones,
refer to: “How to manage DNS Zones in Azure DNS using the Azure CLI 2.0” at:
https://aka.ms/qbejgi

Additional Reading: For information regarding using the Azure portal to manage DNS
zones, refer to: “How to manage DNS Zones in the Azure portal” at: https://aka.ms/uh0eth

After you create the zone, you can manage all common DNS record types, such as A, AAAA, CNAME, MX,
NS, SOA, SRV and TXT. The following table describes the function of each type of record.

Record type   Full name           Function

A (IPv4)      Address             Maps a host name, such as www.adatum.com, to an
                                  IPv4 address, such as 131.107.10.10.

AAAA (IPv6)   Address             Maps a host name to an IPv6 address.

CNAME         Canonical name      Assigns a custom name, such as ftp.adatum.com,
                                  to a host record, such as host1.adatum.com.

MX            Mail exchange       Points to the host that accepts email for the
                                  domain. MX records must point to an A record,
                                  and not to a CNAME record.

NS            Name server         Contains the name of a server hosting a copy of
                                  the DNS zone.

SOA           Start of Authority  Provides information about the writable copy of
                                  the DNS zone, including its location and version
                                  number.

SRV           Service             Points to hosts that provide specific services,
                                  such as the Session Initiation Protocol (SIP) or
                                  Active Directory Domain Services (AD DS).

TXT           Text                Contains custom text.

Records in Azure DNS are created as a record set, which is a collection of DNS records with the same name
and same type. Creating a record set that contains resource records with specific values in the Azure DNS
zone is a two-step process:
1. Create a record set by using the New-AzureRmDnsRecordSet cmdlet, specifying values for the
record type, zone name, resource group, and Time-to-Live (TTL). For example, the following
command creates a record set for the relative name www in the zone adatum.com, with a TTL value
of 60 seconds. The output of the command is stored in the variable $AdatumRs:

$AdatumRs = New-AzureRmDnsRecordSet -Name "www" -RecordType "A" `
  -ZoneName "adatum.com" -ResourceGroupName "AdatumRG" -Ttl 60

2. Add the value (record) to the record set by using the Add-AzureRmDnsRecordConfig cmdlet, which
specifies the record that will be added to the record set. For example, the following command adds
the value 110.15.15.110 to the record set referenced by the variable $AdatumRs, which represents the
www.adatum.com record set that you created in the previous step:

Add-AzureRmDnsRecordConfig -RecordSet $AdatumRs -Ipv4Address 110.15.15.110
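The record set concept can be summarized with a small conceptual model. The following Python sketch is illustrative only; the RecordSet class below is hypothetical and not part of any Azure SDK. It mirrors the two-step flow above: create a set with a shared name, type, and TTL, then add individual record values to it:

```python
# Conceptual model of an Azure DNS record set: records are grouped by
# (relative name, record type) and share a single TTL value.
class RecordSet:
    def __init__(self, name, record_type, ttl):
        self.name = name
        self.record_type = record_type
        self.ttl = ttl
        self.records = []

    def add(self, value):
        self.records.append(value)

# Mirror of the two-step PowerShell flow: create the set, then add values.
www = RecordSet(name="www", record_type="A", ttl=60)
www.add("110.15.15.110")
www.add("110.15.15.111")   # a record set may hold several records of one type

print(www.records)   # ['110.15.15.110', '110.15.15.111']
```

Because all records in the set share one name and type, a DNS query for www.adatum.com returns every value in the set in a single answer.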



Check Your Knowledge


Question

You want to implement load balancing across Azure VMs with support for SSL offloading. What
Azure networking component should you use?

Select the correct answer.

Azure Standard Load Balancer

Forced tunneling

Azure Basic Load Balancer

Azure Application Gateway

Azure Traffic Manager



Lesson 2
Implementing and managing virtual networks
Azure virtual networks constitute the core component of Azure networking. They represent customer
networks in the cloud. You follow similar principles when designing them as when designing on-premises
networks. Choosing the right address space in the planning stage is critical, especially if you intend to
integrate Azure networks with on-premises networks. In this lesson, you will review how to create virtual
networks, and how to manage them.

Lesson Objectives
After completing this lesson, you will be able to:

• Describe how to plan Azure virtual networks.

• Create and configure virtual networks by using the Azure portal.

• Create and configure virtual networks by using Azure PowerShell.


• Create and configure virtual networks by using Azure CLI.

• Create and configure virtual networks by using deployment templates.

Planning for Azure virtual networks


You control the private IP addresses that are
assigned to Azure VMs within an Azure virtual
network by specifying an IP addressing scheme.
Planning an IP addressing scheme within an Azure
virtual network is similar to planning an on-
premises IP addressing scheme. You often use the
same ranges, following the same set of rules.
However, some considerations are unique to
Azure virtual networks.

Selecting private address spaces


As mentioned in the previous lesson, you can use
both private and public IP address spaces for
defining the IP addresses that will be used in Azure virtual networks. RFC 1918 defines three standard
private address spaces:
• 10.0.0.0/8. Includes all addresses from 10.0.0.1 to 10.255.255.255

• 172.16.0.0/12. Includes all addresses from 172.16.0.1 to 172.31.255.255

• 192.168.0.0/16. Includes all addresses from 192.168.0.1 to 192.168.255.255

When you assign an address space for a virtual network, you typically specify a smaller range within one
of the private address spaces. For example, if you specify the address space 10.1.1.0/24, it means that only
addresses from 10.1.1.1 to 10.1.1.255 will be allocated to your virtual network. You can allocate multiple,
non-adjacent IP address spaces to the same virtual network. You can also add one or more IP address
spaces to an existing virtual network, if that virtual network is not part of a VNet Peering configuration.

If you plan to connect to another virtual network or an on-premises network, you must ensure that their
IP address spaces do not overlap. Always consider using an IP address space that is not in use in your
organization, whether on-premises or in other virtual networks. Even if you initially plan for an isolated
virtual network, you might need to establish connectivity later. If there is any overlap in address spaces,
you will have to recreate the virtual network.
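Overlap checks like these can be automated before any deployment. The following Python sketch uses the standard ipaddress module to confirm that a candidate address space is private (RFC 1918) and does not overlap any range already in use; the example ranges are hypothetical:

```python
import ipaddress

# The three RFC 1918 private address blocks.
RFC1918 = [ipaddress.ip_network(n) for n in
           ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def validate(candidate, in_use):
    """Check that candidate is private and does not overlap any range in use."""
    net = ipaddress.ip_network(candidate)
    if not any(net.subnet_of(block) for block in RFC1918):
        return "not a private (RFC 1918) range"
    for used in in_use:
        if net.overlaps(ipaddress.ip_network(used)):
            return f"overlaps {used}"
    return "ok"

# Hypothetical ranges: an on-premises network and an existing virtual network.
in_use = ["10.0.0.0/16", "192.168.1.0/24"]
print(validate("10.1.0.0/16", in_use))   # ok
print(validate("10.0.1.0/24", in_use))   # overlaps 10.0.0.0/16
```

Running such a check against every on-premises and virtual network range during planning avoids having to recreate a virtual network later.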

Note: At the time of authoring this course, most Azure virtual network–related resources—
such as internal load balancers, user-defined routes, or NSGs—are not IPv6-capable. Azure
provides IPv6 support for internet-facing Azure Load Balancer. For this reason, any subsequent
references to IP imply the use of IPv4, unless explicitly stated otherwise.

Additional Reading: For more information about Azure support for IPv6 in its internet-
facing Azure load balancer, refer to: “Overview of IPv6 for Azure Load Balancer” at:
https://aka.ms/p75q4e

Choosing subnets
To attach Azure VMs to a virtual network, you must configure the network by dividing its IP address space
into one or more subnets. The range you specify for a subnet must be contained entirely within the virtual
network’s address space. Within each subnet, Azure reserves five IP addresses: the first four and the last.
You cannot assign them to network adapters of Azure VMs. Effectively, the smallest subnet that you can
implement in Azure has a 29-bit subnet mask, which leaves you with three usable IP addresses.

IP addresses within a subnet are allocated sequentially in the order in which you provision or bring online
Azure VMs on that subnet. For example, the first Azure VM that you deployed into the subnet
192.168.0.0/24 would, by default, have the IP address of 192.168.0.4. As mentioned earlier, you can
change this behavior by assigning a static IP address to the network adapter of that Azure VM.
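The reservation and allocation rules can be checked with Python's ipaddress module. The sketch below assumes the standard Azure behavior of reserving the network address, the next three addresses, and the last address of each subnet:

```python
import ipaddress

def assignable(subnet):
    """IP addresses in an Azure subnet that VMs can actually receive.

    Azure reserves five addresses per subnet: the network address, the
    first three host addresses, and the last (broadcast) address.
    """
    net = ipaddress.ip_network(subnet)
    all_addrs = list(net)
    return [str(a) for a in all_addrs[4:-1]]    # drop first four and last

print(assignable("192.168.0.0/24")[0])   # 192.168.0.4 - first VM's default IP
print(len(assignable("192.168.0.0/29"))) # 3 - smallest usable Azure subnet
```

This is why the first dynamically assigned address in 192.168.0.0/24 is .4, and why a /29 subnet yields exactly three usable addresses.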

Note: It is relatively easy to move Azure VMs across subnets within the same virtual
network. However, it is not possible to move an Azure VM across subnets on different virtual
networks. If you must attach an Azure VM to a subnet in another virtual network, you must
delete it while preserving its disks, and then redeploy it to the target network by using the
existing disks.

Using the Azure portal to create virtual networks


To create a virtual network in the Azure portal,
perform the following procedure:

1. Sign in into the Azure portal.


2. In the hub menu, click + Create a resource,
select Networking, and then click Virtual
network.
3. On the Create virtual network blade, in the
Name text box, type a descriptive name for
the virtual network.

4. In the Address space text box, specify the IP address space by using Classless Interdomain Routing
(CIDR) notation.

5. In the Subscription drop-down box, select the Azure subscription in which you want to create a
virtual network.

6. In the Resource group box, either create a new resource group or select an existing one.

7. In the Location drop-down box, select the Azure region in which you want to create a virtual
network.

8. In the Subnet section, in the Name text box, type a name for the first subnet on the virtual network.

9. In the Subnet section, in the Address range box, choose the IP address range for the subnet by
using CIDR notation.

10. Disable or enable Service endpoints for the subnet.

11. Click Create.

After the virtual network provisioning is complete, you can configure it further by creating additional
subnets or setting up a DNS server address.

To modify virtual network settings in the Azure portal, perform the following procedure:

1. Select your newly created virtual network. On the virtual network blade, you can configure additional
virtual network properties.

2. Click Properties and identify the Resource ID of the virtual network, the Azure region, the
subscription name, and the subscription ID.

3. Click Address space and provide additional IP address spaces that you want to include in that virtual
network.

4. To create an additional subnet, click Subnets.

5. To add a new subnet, on the Subnets blade, click +Subnet.


6. On the Add Subnet blade, in the Name text box, type a descriptive name. In the Address range
(CIDR block) box, type the IP address range for the subnet by using CIDR notation, and then, to
create the subnet, click OK.

7. To configure DNS server settings for the virtual network, click DNS servers.

8. On the DNS servers blade, click Custom. In the Add DNS server text box, type the IP address of
your custom DNS server, and then click Save.

9. To modify the Role Based Access Control settings for this resource, click the Access control (IAM)
link.

10. To add a custom tag to the virtual network, click the Tags link.

Using Azure PowerShell to create virtual networks


You can use the Azure PowerShell module to
create Azure virtual networks by following this
procedure:

1. Start Microsoft Azure PowerShell and sign in to your subscription:

Login-AzureRmAccount

2. If there are multiple subscriptions associated with your account, select the target subscription in
which you are going to create a virtual network:

Set-AzureRmContext -SubscriptionName <Name of your subscription>

3. Create a new resource group:

New-AzureRMResourceGroup -Name AdatumRG -Location centralus

4. Create a new VNet (in this example, its name is set to AdatumVnet), assign an IP address space (in
this example, set to 192.168.0.0/16) and the target Azure region, and then store a reference to the
new virtual network in the $vnet variable:

$vnet = New-AzureRMVirtualNetwork -ResourceGroupName AdatumRG -Name AdatumVnet `
  -AddressPrefix 192.168.0.0/16 -Location centralus

5. Add a subnet to the new virtual network:

Add-AzureRmVirtualNetworkSubnetConfig -Name FrontEnd -VirtualNetwork $vnet `
  -AddressPrefix 192.168.0.0/24

6. Update the configuration in the virtual network:

Set-AzureRMVirtualNetwork -VirtualNetwork $vnet

Using Azure CLI to create virtual networks


You can use the Azure CLI to create Azure virtual
networks. To use Azure CLI to create a virtual
network, perform the following steps.

1. Start an Azure CLI session and sign in to your subscription:

az login

2. If there are multiple subscriptions associated with your account, select the target subscription in
which you are going to create a virtual network:

az account set --subscription <Name of your subscription>



3. Create a new resource group:

az group create --name AdatumRG --location centralus

4. Create a new VNet (in this example, its name is set to AdatumVnet), assign an IP address space (in
this example, 192.168.0.0/16), specify the target Azure region, and then add a subnet to the new
virtual network:

az network vnet create \
  --name AdatumVNet \
  --resource-group AdatumRG \
  --location centralus \
  --address-prefix 192.168.0.0/16 \
  --subnet-name FrontEnd \
  --subnet-prefix 192.168.0.0/24

Using Azure PowerShell to create a virtual network based on an Azure Resource Manager template
You can download an existing Azure Resource
Manager template for creating a virtual network
from GitHub at https://aka.ms/iih1md. In addition
to the sample templates, you will find Azure-
related APIs, software development kits (SDKs),
and other open source projects.
To create a virtual network based on a template,
you can identify the template parameters and
specify their values interactively during
deployment. Alternatively, you can store these
values in a parameters file and reference this file
during deployment. It is possible to initiate a
template-based deployment by using Azure PowerShell, Azure CLI, or Visual Studio, or directly from the
Azure portal.
The following procedure demonstrates how to use Azure PowerShell to deploy an Azure virtual network
defined in the Azure Quickstart template named Virtual Network with two Subnets available from
GitHub at http://aka.ms/Mt32e4:

1. Download the azuredeploy.json file in RAW format, and then open it in any text editor. You should
consider using an editor that supports JSON editing features. Such capabilities are automatically
available in Visual Studio and Visual Studio Code.

2. Identify the parameters to which you will assign custom values during deployment:

o vnetName - the name of the virtual network.

o vnetAddressPrefix - the IP address space of the virtual network in CIDR notation.

o subnet1Name - the name of the first subnet.


o subnet1Prefix - the IP address range of the first subnet in CIDR notation.

o subnet2Name - the name of the second subnet.

o subnet2Prefix - the IP address range of the second subnet in CIDR notation.

o location - the Azure region where the virtual network will be created.

3. Navigate to the resources section to identify the resources created in Azure Resource Manager and
their properties, including:

o Type - the resource type Microsoft.Network/virtualNetworks.

o Name - the name of the resource.

o Location - the Azure region where the resource will be created.


4. Download the azuredeploy-parameters.json file in RAW format, and then open it in a text editor.

5. Set the parameters to your custom values and save the changes. For example:

Modify the values for the parameters of the Azure Resource Manager template that you will use for
creation of the virtual network.

Modify the azuredeploy-parameters.json file


{
"location": {
"value": "Central US"
},
"vnetName": {
"value": "AdatumVNet"
},
"vnetAddressPrefix": {
"value": "10.0.0.0/16"
},
"subnet1Name": {
"value": "FrontEnd"
},
"subnet1Prefix": {
"value": "10.0.0.0/24"
},
"subnet2Name": {
"value": "BackEnd"
},
"subnet2Prefix": {
"value": "10.0.1.0/24"
}
}

6. Start Microsoft Azure PowerShell and sign in to your subscription:

Login-AzureRMAccount

7. If there are multiple subscriptions associated with your user account, select the target subscription in
which you are going to create a virtual network:

Set-AzureRmContext -SubscriptionName <Name of your subscription>

8. Create a new resource group:

New-AzureRMResourceGroup -Name AdatumRG -Location centralus

9. Run the New-AzureRmResourceGroupDeployment cmdlet to deploy the new virtual network by


using the template and parameter files that you downloaded and modified:

New-AzureRmResourceGroupDeployment -Name AdatumVNetDeployment `
  -ResourceGroupName AdatumRG `
  -TemplateFile .\azuredeploy.json `
  -TemplateParameterFile .\azuredeploy-parameters.json

Note: You could simplify this particular deployment by clicking Deploy to Azure
on the Virtual Network with two Subnets GitHub page. The procedure described in this topic
illustrates an approach that you can follow when using custom templates that are not available
from GitHub.

Demonstration: Deploying a virtual network by using an Azure Resource


Manager template
In this demonstration, you will see how to implement a VNet by using an Azure Resource Manager
template.

Check Your Knowledge


Question

Which Azure components support IPv6 connectivity?

Select the correct answer.

Azure internal load balancer

Azure internet-facing load balancer

Azure Traffic Manager

Network Security Groups

User-defined routes

Lab A: Using a deployment template and Azure PowerShell to implement Azure virtual networks
Scenario
Adatum Corporation plans to create several virtual networks in their Azure subscription. They will all
reside in the same Azure region. You want to test the deployment of Azure virtual networks by using both
imperative and declarative methods.

Objectives
After completing this lab, you will be able to:

• Create a virtual network by using deployment templates.

• Create a virtual network by using PowerShell.

• Create a virtual network by using Azure CLI.

Note: The lab steps for this course change frequently due to updates to Microsoft Azure.
Microsoft Learning updates the lab steps frequently, so they are not available in this manual. Your
instructor will provide you with the lab documentation.

Lab Setup
Estimated Time: 30 minutes
Virtual machine: 20533E-MIA-CL1

User name: Student

Password: Pa55w.rd
Before starting this lab, ensure that you have performed the Preparing the Environment demonstration
tasks at the beginning of the first lesson in this module, and that the setup script has completed.

Question: What are the methods that you can use to create an Azure virtual network?

Lesson 3
Configuring an Azure virtual network
An Azure virtual network has many similarities with on-premises infrastructure. You can control name
resolution by deploying your own DNS server and define routes to control network traffic flow. NSGs and
support for forced tunneling help you address your organization’s security needs.

Lesson Objectives
After completing this lesson, you will be able to:

• Configure name resolution in an Azure virtual network.

• Configure user-defined routes.


• Configure forced tunneling.

• Configure NSGs.

Configuring name resolution in an Azure virtual network


Name resolution is the process by which a
computer name is resolved to an IP address. It
eliminates the need to reference IP addresses when
accessing remote computers. Azure automatically
provides a DNS-based name resolution service. This
service enables Azure VMs to communicate via their
names. However, some scenarios might require a
custom name resolution, including the following:

• Name resolution between Azure VMs in a virtual network and on-premises computers in a hybrid
connectivity scenario.

• Name resolution between Azure VMs in different virtual networks connected via VNet-to-VNet
connections or VNet Peering.

• Reverse lookup of private IP addresses.

• Name resolution of private domain names.

Note: Providing DNS name resolution in an Active Directory domain environment hosted
on Azure VMs requires a custom DNS server.

Azure-provided DNS name resolution


Azure-provided DNS name resolution does not require any custom configuration and is highly available
by design. By default, the DNS suffix is the same across all VMs in the same virtual network. This means
that it is sufficient to specify host names to resolve them to the corresponding IP addresses.

Name resolution by using a custom DNS server


If you are planning to use a custom DNS server for name resolution on an Azure virtual network, you must
ensure that all Azure VMs on that virtual network can reach it to register their names and resolve the
names of other VMs. You can either deploy DNS on an Azure VM or use a DNS server residing outside
Azure. Your DNS server should meet the following requirements:
• Supports dynamic registration of resource records in DNS.

• Has record scavenging switched off. DHCP leases in an Azure virtual network are infinite, so record
scavenging might remove records that have not been renewed but are still valid.

• Has DNS recursion enabled.

• Is accessible on TCP/UDP port 53 from all VMs.

Configuring user-defined routes


Routing in Azure virtual networks is handled
automatically by the underlying virtualization
platform. As a result, all VMs connected to the
same virtual network can, by default,
communicate with each other without requiring
configuration of a default gateway for each VM.
Similarly, each VM, by default, can reach the
internet, any directly connected Azure virtual
networks, and on-premises networks, if you have
implemented hybrid connectivity. Azure
accomplishes this by assigning a set of system
routes to every virtual network subnet that you
create. Those system routes contain the following rules:

• Local virtual network rule. This rule facilitates communication between VMs in the same virtual
network.

• On-premises rule. This rule is created when you configure a hybrid connection from your on-premises
environment. The rule directs outbound traffic targeting your on-premises IP address space via a VPN
gateway.

• Internet rule. This rule represents the default route to the internet via a platform-managed internet
gateway.
Routes rely on the following information to determine how to direct traffic flow:

• Address prefix. Specifies the destination IP address range in CIDR notation.


• Next hop type. Specifies the next network node where the packet will be forwarded. Possible
destinations are:

o Virtual Network. Destination is accessible directly within the same virtual network.

o Virtual Network Gateway. Destination is on another virtual network or an on-premises network,


reachable via a VPN gateway.

o Internet. Destination is on the public internet, reachable via the internet gateway that the Azure
infrastructure manages.

o Virtual Appliance. Destination is accessible via an intermediary virtual machine residing on the
local virtual network.

o None. Destination is not accessible. This configuration is useful for preventing the delivery of
packets targeting a particular IP address space.

• Next hop value. Applies to the Virtual Appliance next hop type and contains the IP address of a
virtual appliance, to which packets should be forwarded. The virtual appliance must reside on a
different subnet than the one to which the route applies. In scenarios that involve VNet Peering, this
subnet can reside on the peered virtual network.
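The effect of a user-defined route can be sketched as a longest-prefix-match lookup: among all routes whose address prefix covers the destination, the most specific prefix wins. The following Python example is a simplified model of this behavior, not how the Azure fabric is implemented; the route entries are hypothetical:

```python
import ipaddress

# (address prefix, next hop type) - a simplified route table combining two
# system routes with one hypothetical user-defined route to a virtual appliance.
routes = [
    ("192.168.0.0/16", "VirtualNetwork"),      # local virtual network rule
    ("0.0.0.0/0",      "Internet"),            # default internet rule
    ("192.168.1.0/24", "VirtualAppliance"),    # user-defined route to an NVA
]

def next_hop(destination):
    """Pick the matching route with the longest (most specific) prefix."""
    dest = ipaddress.ip_address(destination)
    matches = [(ipaddress.ip_network(p), hop) for p, hop in routes
               if dest in ipaddress.ip_network(p)]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("192.168.1.10"))  # VirtualAppliance - UDR overrides system route
print(next_hop("192.168.2.10"))  # VirtualNetwork
print(next_hop("8.8.8.8"))       # Internet
```

This is why adding a /24 user-defined route diverts only that subnet's traffic to the appliance, while the broader system routes continue to handle everything else.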

Note: You must enable the IP Forwarding setting of the network adapters of the Azure
VM hosting a virtual appliance. The platform requires that IP Forwarding is enabled to allow
forwarding of network packets to an Azure VM when the packets’ destination does not match
that VM’s IP address.

You have the option of using these routes to customize the default network traffic flow. For example, your
network policy might state that all internet-bound traffic should pass through an on-premises system for
auditing and packet inspection. In such a case, you would need to configure user-defined routes that
implement forced tunneling. Similarly, you might need to implement an Azure-resident virtual appliance
for packet inspection.

After you create a user-defined route, you must assign it to one or more subnets. The routes will apply to
traffic leaving these subnets. You can assign a route to the gateway subnet, which contains the virtual
gateway. This way, you can control routing of ingress traffic originating from other virtual or on-premises
networks.

You can use Azure PowerShell, Azure CLI, or Azure Resource Manager templates to create user-
defined routes and to assign them to virtual network subnets. For example, suppose that you want to
inspect all traffic that originates from the FrontEnd subnet and targets the BackEnd subnet in the AdatumVNet virtual
network. To accomplish this, you define a route that directs the relevant traffic from FrontEnd to the
virtual appliance named NVA1, which you will deploy to the NVA subnet on the same virtual network.
The following procedure provides an example of using Azure PowerShell to implement this scenario:

1. Start Microsoft Azure PowerShell, and sign in to your subscription:

Login-AzureRMAccount

2. If there are multiple subscriptions associated with your account, select the target subscription in
which you are going to create the virtual network and configure user-defined routes:

Set-AzureRMContext -SubscriptionName <Name of your subscription>

3. Create a new resource group:

New-AzureRMResourceGroup -Name AdatumRG -Location centralus

4. Create a new virtual network named AdatumVNet with the address space 192.168.0.0/16 and store
a reference to it in a Windows PowerShell variable $vnet:

$vnet = New-AzureRMVirtualNetwork -ResourceGroupName AdatumRG -Name AdatumVNet `
  -AddressPrefix 192.168.0.0/16 -Location centralus

5. Add three subnets to the new virtual network:

Add-AzureRmVirtualNetworkSubnetConfig -Name FrontEnd -VirtualNetwork $vnet -AddressPrefix 192.168.0.0/24
Add-AzureRmVirtualNetworkSubnetConfig -Name BackEnd -VirtualNetwork $vnet -AddressPrefix 192.168.1.0/24
Add-AzureRmVirtualNetworkSubnetConfig -Name NVA -VirtualNetwork $vnet -AddressPrefix 192.168.100.0/24

6. Update the configuration in the virtual network:

Set-AzureRMVirtualNetwork -VirtualNetwork $vnet

7. Create a route that will route all the traffic from FrontEnd (192.168.0.0/24) to BackEnd
(192.168.1.0/24) via the virtual appliance (192.168.100.4):

$route1 = New-AzureRmRouteConfig -Name AdatumRoutetoNVA1 `
  -AddressPrefix 192.168.1.0/24 -NextHopType VirtualAppliance `
  -NextHopIpAddress 192.168.100.4

8. Create a route table named Adatum-RTNVA1 that contains the previously created route:

$routeTable1 = New-AzureRmRouteTable -ResourceGroupName AdatumRG `
  -Location centralus -Name Adatum-RTNVA1 -Route $route1

9. Associate the previously created route table with the FrontEnd subnet:

Set-AzureRmVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name FrontEnd `
  -AddressPrefix 192.168.0.0/24 -RouteTable $routeTable1

10. Update the configuration of the virtual network:

Set-AzureRMVirtualNetwork -VirtualNetwork $vnet

11. Use a variable to store a reference to the network adapter of the virtual appliance. The name of the
network adapter in this scenario is NICNVA1:

$nicNVA1 = Get-AzureRmNetworkInterface -ResourceGroupName AdatumRG -Name NICNVA1

12. Enable IP forwarding on NICNVA1:

$nicNVA1.EnableIPForwarding = $true

13. Update the network adapter settings:

Set-AzureRmNetworkInterface -NetworkInterface $nicNVA1

Additional Reading: For the equivalent steps when using Azure CLI, refer to: “Create User-
Defined Routes (UDR) using the Azure CLI 2.0” at: https://aka.ms/ic43ns

Regardless of the provisioning method, after deploying the Azure networking components, you must
provision the NVA1 Azure VM and configure routing within its operating system. The configuration
specifics are dependent on which product is providing the routing functionality. You can use a product
available from the Azure Marketplace, or you can use the operating system routing functionality of a
Windows or Linux Azure VM.

Configuring forced tunneling


Many companies enforce packet inspection and
auditing policies for traffic that crosses their
internal network boundary. This creates a
challenge when they extend their networks to
Azure, because VMs that reside on an Azure
virtual network have, by default, a direct route to
the internet.

Forced tunneling redirects internet-bound traffic


back to the company’s on-premises infrastructure.
With forced tunneling, you can selectively choose
virtual network subnets from which the traffic
should be routed back to your on-premises
network. At that point, you can apply the same packet inspection and auditing policies to the redirected
traffic as you apply to the traffic originating from your on-premises networks.
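The route-selection behavior that forced tunneling relies on can be illustrated with a short Python sketch. This is a simplified model of longest-prefix matching, not Azure code; the prefixes and next-hop labels are chosen to match the example that follows.

```python
import ipaddress

# Simplified model of Azure route selection: the most specific
# (longest-prefix) matching route wins. With a user-defined 0.0.0.0/0
# route pointing to the VPN gateway, internet-bound traffic is
# redirected while intra-VNet traffic still uses the system route.
routes = [
    ("192.168.0.0/16", "VNetLocal"),         # system route for the virtual network
    ("0.0.0.0/0", "VirtualNetworkGateway"),  # user-defined default route (forced tunneling)
]

def next_hop(destination_ip):
    matches = [
        (ipaddress.ip_network(prefix), hop)
        for prefix, hop in routes
        if ipaddress.ip_address(destination_ip) in ipaddress.ip_network(prefix)
    ]
    # Pick the match with the longest prefix.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("192.168.1.10"))   # intra-VNet traffic stays local: VNetLocal
print(next_hop("93.184.216.34"))  # internet-bound traffic is tunneled: VirtualNetworkGateway
```

The same mechanism explains why only the subnets whose route tables contain this default route have their traffic redirected.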
You configure forced tunneling by creating a default route for selected subnets in the virtual network that
directs outbound traffic through the VPN gateway residing within the virtual network. For example, if you
plan to use forced tunneling for the traffic that originates from the subnet BackEnd in the virtual network
AdatumVNet, perform the following steps by using Azure PowerShell:
1. Start Microsoft Azure PowerShell, and then sign in to your subscription:

Login-AzureRMAccount

2. Select the subscription in which you are going to create the virtual network and configure forced
tunneling:

Set-AzureRmContext -SubscriptionName <Name of your subscription>

3. Create a new resource group:

New-AzureRMResourceGroup -Name AdatumRG -Location centralus

4. Create a new virtual network named AdatumVNet with the address space 192.168.0.0/16 and store
a reference to it in a Windows PowerShell variable $vnet:

$vnet = New-AzureRMVirtualNetwork -ResourceGroupName AdatumRG -Name AdatumVNet `
-AddressPrefix 192.168.0.0/16 -Location centralus

5. Add subnets to the new virtual network:

Add-AzureRmVirtualNetworkSubnetConfig -Name FrontEnd -VirtualNetwork $vnet `
-AddressPrefix 192.168.0.0/24
Add-AzureRmVirtualNetworkSubnetConfig -Name BackEnd -VirtualNetwork $vnet `
-AddressPrefix 192.168.1.0/24

6. Add a gateway subnet to the new virtual network:

Add-AzureRmVirtualNetworkSubnetConfig -Name GatewaySubnet -VirtualNetwork $vnet `
-AddressPrefix 192.168.200.0/28

7. Update the configuration of the virtual network:

Set-AzureRMVirtualNetwork -VirtualNetwork $vnet



8. Create the object representing your on-premises VPN gateway and store a reference to it in the
variable $AdatumLocalGW (this example assumes that the gateway IP address is 111.111.111.111 and
the on-premises IP address space is 10.0.0.0/16):

$AdatumLocalGW = New-AzureRmLocalNetworkGateway -Name "AdatumLocalGW" `
-ResourceGroupName "AdatumRG" -Location "centralus" -GatewayIpAddress "111.111.111.111" `
-AddressPrefix "10.0.0.0/16"

9. Create a route that will send all the traffic through the VPN gateway:

$route = New-AzureRmRouteConfig -Name DefaultRoute `
-AddressPrefix 0.0.0.0/0 -NextHopType VirtualNetworkGateway

10. Create a route table named Adatum-FT that contains the previously created route:

$routeTable = New-AzureRmRouteTable -ResourceGroupName AdatumRG -Location centralus `
-Name Adatum-FT -Route $route

11. Associate the route table to the BackEnd subnet:

Set-AzureRmVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name BackEnd `
-AddressPrefix 192.168.1.0/24 -RouteTable $routeTable

12. Update the configuration of the virtual network:

Set-AzureRMVirtualNetwork -VirtualNetwork $vnet

13. Create a public IP address resource in the resource group AdatumRG:

$pip = New-AzureRmPublicIpAddress -Name "GatewayIP" -ResourceGroupName "AdatumRG" `
-Location "centralus" -AllocationMethod Dynamic

14. Create the IP configuration of the Azure VPN gateway:

$gwsubnet = Get-AzureRmVirtualNetworkSubnetConfig -Name "GatewaySubnet" `
-VirtualNetwork $vnet
$ipconfig = New-AzureRmVirtualNetworkGatewayIpConfig -Name "gwIpConfig" `
-SubnetId $gwsubnet.Id -PublicIpAddressId $pip.Id

15. Create an Azure VPN gateway named AdatumGW and allocate a dynamic public IP address to it. In
the previous steps, you stored IP configurations of the Azure VPN gateway in the $ipconfig variable
and a reference to the on-premises VPN gateway in the $AdatumLocalGW variable:

$AdatumGW = New-AzureRmVirtualNetworkGateway -Name "AdatumGW" -ResourceGroupName "AdatumRG" `
-Location "centralus" -IpConfigurations $ipconfig -GatewayType Vpn -VpnType RouteBased `
-GatewayDefaultSite $AdatumLocalGW -EnableBgp $false -GatewaySku VpnGw1

16. Establish the site-to-site VPN connection between AdatumGW and local gateway AdatumLocalGW
by using the preshared key:

New-AzureRmVirtualNetworkGatewayConnection -Name "Connection1" -ResourceGroupName "AdatumRG" `
-Location "centralus" -VirtualNetworkGateway1 $AdatumGW -LocalNetworkGateway2 $AdatumLocalGW `
-ConnectionType IPsec -SharedKey "preSharedKey"

Note: The configuration described in this topic applies to forced tunneling in scenarios that
do not involve BGP route exchange. BGP is a method of dynamically exchanging routing
configuration when using ExpressRoute for hybrid connectivity. It is possible to implement BGP in
combination with route-based VPN gateways.

Additional Reading: For more information regarding using BGP with Azure VPN gateways,
refer to: “Overview of BGP with Azure VPN Gateways” at: https://aka.ms/rsmh1y

Configuring network security groups


NSGs provide network-based protection for Azure
VMs. They allow you to control inbound and
outbound traffic on a network adapter or at a
subnet level. An NSG contains inbound and
outbound rules that specify whether to allow or
deny traffic. This determination depends on up to
five different criteria, including a range of source
IP addresses, a range of source ports, a range of
destination IP addresses, a range of destination
ports, and a protocol. In the Azure portal,
configuration of an NSG rule includes the
following properties:
• Name. This is a unique identifier for the rule.

• Direction. Direction specifies whether the traffic is inbound or outbound.

• Priority. If multiple rules match the traffic, the rule with the lowest priority value takes precedence, because rules are processed in ascending order of their priority values.
• Access. Access specifies whether the traffic matching the rule settings is allowed or denied.

• Source. This identifies from where traffic originates. You can choose from three options for this
property: a range of IP addresses in CIDR notation, the Any setting that denotes all IP addresses, or
the Service Tag setting designating networks of a particular predefined type. This eliminates the
need to specify a potentially large number of CIDR blocks.

Note: At the time of authoring this course, NSG rules support the following service tags:

• Internet. This tag represents all internet IP addresses, including the Azure public IP address ranges.

• VirtualNetwork. This tag represents all IP addresses included in the IP address space of the virtual
network. It also includes the IP address space of your on-premises networks or other virtual networks
connected to the local virtual network.

• AzureLoadBalancer. This tag represents all IP addresses from which Azure load balancer health probes
originate.

• AzureTrafficManager. This tag represents all IP addresses from which Azure Traffic Manager health
probes originate.

• Storage. This tag is for IP addresses representing public endpoints of the Azure Storage service.

• SQL. This tag is for IP addresses representing public endpoints of the Azure SQL Database service.
Storage and SQL service tags allow you to control outbound traffic from Azure VMs to the
respective PaaS services via service endpoints. For both types of tags, you can designate
individual regions for which traffic should be allowed or denied.

• Source port range. This specifies source ports by using either a single port number from 1-65535, a
range of ports (200-400), or the asterisk (*) wildcard character that denotes all ports.

• Destination. This identifies the traffic destination. Just like the Source property, it can take the form of
a range of IP addresses in CIDR notation, Any that denotes all IP addresses, or a ServiceTag
designating networks of a particular predefined type.
• Destination port range. This specifies destination ports by using either a single port number from 1-
65535, a range of ports (200-400), or the asterisk (*) wildcard character that denotes all ports.

• Protocol. Protocol specifies the network protocol used by the incoming or outgoing traffic. You can
set it to UDP, TCP, or the asterisk (*) wildcard character. The wildcard includes ICMP.

Note: To specify a single IP address as the source or destination of a rule, use the /32
subnet mask in the CIDR notation.
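As a quick check of this note, Python's standard ipaddress module (used here purely for illustration, not part of the Azure tooling) confirms that a /32 prefix contains exactly one address:

```python
import ipaddress

# A /32 prefix in CIDR notation identifies a single IP address.
single_host = ipaddress.ip_network("192.168.1.10/32")
print(single_host.num_addresses)                            # 1
print(ipaddress.ip_address("192.168.1.10") in single_host)  # True
```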

Note: The Azure portal further simplifies rule configuration by providing you with the Basic
view of each rule’s properties. In the Basic view, you do not specify the source port range; instead
you select a predefined protocol, such as HTTP, HTTPS, FTP, or SSH. If you must specify a custom
port configuration, you can switch to the Advanced view.

Note: You can also implement Azure NSGs based on custom application security groups,
representing a collection of Azure VMs that share the same network connectivity requirements.
After you create an application security group, you can reference it when defining an NSG rule as
you would use service tags. At the time of authoring this content, application security groups are
in preview.

Additional Reading: For information regarding implementing application security groups,


refer to: “Filter network traffic with network and application security groups (Preview)” at:
https://aka.ms/Kirqff

There are predefined default rules for inbound and outbound traffic. You cannot delete these rules, but
you can override them because they have the lowest priority. Default rules allow all inbound and
outbound traffic within a virtual network, allow outbound traffic to the internet, and allow inbound traffic
that Azure load-balancer health probes use to determine the state of load-balanced VMs. There is also a
default rule with the lowest priority in both inbound and outbound sets of rules that denies all network
communication.
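The first-match evaluation that these rules follow can be modeled with a brief Python sketch. This is an illustration only; the rule fields and values are simplified and do not represent the Azure API. Rules are processed in ascending order of priority value, and the first matching rule decides the outcome.

```python
# Simplified model of NSG rule processing: rules are evaluated in
# ascending order of their priority value, and the first matching
# rule decides the outcome. Rule fields here are illustrative only.
rules = [
    {"priority": 100, "dest_port": 443, "access": "Allow"},
    {"priority": 200, "dest_port": 3389, "access": "Allow"},
    {"priority": 65500, "dest_port": "*", "access": "Deny"},  # default deny-all-style rule
]

def evaluate(dest_port):
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if rule["dest_port"] == "*" or rule["dest_port"] == dest_port:
            return rule["access"]
    return "Deny"

print(evaluate(443))   # Allow (matched by the priority-100 rule)
print(evaluate(8080))  # Deny (falls through to the lowest-priority default rule)
```

Because the default rules sit at the lowest priority, any custom rule with a lower priority value overrides them, which is why they can be overridden but not deleted.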

Planning NSGs
You can design NSGs to implement protected subnets that restrict inbound or outbound traffic to a
specific set of IP addresses. You can also assign NSGs to individual network adapters, allowing you to
configure different security groups applicable to an individual network adapter of a multi–network
adapter VM.

You can associate the same NSG with multiple subnets and network adapters. This might help you avoid
reaching the restrictions on the number of NSGs. By default, you can create 100 NSGs per region per
subscription. You can raise this limit to 5000 by contacting Azure support.

There are additional considerations to take into account when implementing NSGs:

• You can apply only a single NSG to a VM or network adapter.

• By default, you can have up to 200 rules in a single NSG. You can raise this limit to 1000 by
contacting Azure support.

Creating NSGs and configuring rules


You can use the Azure portal, Azure PowerShell, Azure CLI, or Azure Resource Manager templates to
create and modify NSGs. In Azure PowerShell, to create a new NSG named Adatum-RG, you use the
New-AzureRMNetworkSecurityGroup command. Azure CLI provides the az network nsg create
command that offers the equivalent functionality. The Azure portal offers a convenient, intuitive interface
for creating new NSGs and customizing existing NSGs. For example, to create a custom rule for an existing
NSG in the Azure portal, you can use the following procedure:

1. In the hub menu of the Azure portal, click All services, and then select Network security groups.

2. From the list in the Network security groups blade, select the NSG that you plan to modify.

3. Click either Inbound security rules or Outbound security rules, depending on the type of rule you
want to modify.

4. In the resulting blade, click Add.

5. In the Add inbound security rule blade or Add outbound security rule blade, depending on your
earlier choice, click Advanced to configure the following properties, and then click OK:

o Name. Use a descriptive name.

o Priority. Specify a value to identify the priority of the rule.

o Source. Select Any, CIDR Block, or Service Tag.

o Protocol. Select either Any or the TCP or UDP protocol.

o Source port range. Specify either a single port or a range of ports to match the rule.
o Destination. Select Any, CIDR Block, or Service Tag.

o Destination port range. Specify either a single port or a range of ports to match the rule.

o Action. Specify either the Allow or Deny action for the traffic that matches the properties of the
rule.

Note: You can simplify the configuration of a network security rule by selecting the Basic
configuration option, rather than Advanced. This allows you to choose one of the predefined
entries in the Service drop-down list, rather than specifying the Protocol value and port
ranges.

Demonstration: Configuring network security groups


In this demonstration, you will see how to create a network security group and associate it with a subnet
of a virtual network.

Check Your Knowledge


Question

Which of the following scenarios require the use of a custom DNS server to provide name
resolution among all networked computers?

Select the correct answer.

Name resolution of internet names

Azure VMs in the same virtual network

Hybrid connection via ExpressRoute

VMs residing in two different virtual networks connected via VNet Peering

Reverse lookup of private IP addresses within the same virtual network



Lesson 4
Configuring virtual network connectivity
In many cases, you can think of Azure as an extension of your datacenter in the cloud. However,
extending your existing environment to Azure relies on the ability to provide network connectivity
between your on-premises environment and Azure virtual networks, as well as between Azure virtual
networks. In this lesson, you will learn how to establish such connectivity.

Lesson Objectives
After completing this lesson, you will be able to:

• Describe the options for virtual network connectivity.

• Configure a point-to-site VPN.

• Configure site-to-site VPNs.

• Configure VNet Peering.


• Connect virtual networks deployed by using the Azure Resource Manager deployment model.

Azure virtual network connectivity options


As briefly described in the first lesson of this
module, to connect to an Azure virtual network,
you can use one of the following methods:

Point-to-site
Point-to-site VPN allows direct connectivity to an
Azure virtual network from individual computers
running Windows, Mac OS X, and Linux. To
establish connectivity, VPN clients must use one of
the following protocols:

• SSTP. This connectivity option is available on


all currently supported versions of the
Windows operating system.

• IKEv2. This connectivity option is available from Mac OS X version 10.11 and newer, and from Linux
via strongSwan 5.5.1.

Point-to-site VPN supports two authentication mechanisms:

• Certificate-based authentication. You can either use an internal or public certification authority (CA)
or generate self-signed certificates. You must upload the public key of the root (representing your
public key infrastructure [PKI] deployment or a self-signed one) to Azure and associate it with the
target virtual network containing the VPN gateway. You must also generate client certificates
(typically one per user), either by relying on the same CA that you requested the root certificate from
or by generating self-signed client certificates that reference the self-signed root certificate. Install the
client certificates with their respective private keys in the private certificate store on client computers.
Effectively, the VPN tunnel relies on the implicit trust between the client certificates on VPN client
computers and the root certificate uploaded to the Azure VPN gateway.

• RADIUS-based authentication to Active Directory or another RADIUS-capable identity provider. In this


configuration, Azure VPN gateway relays authentication requests and responses between the RADIUS
server and VPN clients.

A point-to-site VPN leverages the VPN capabilities built into the operating system, but it also requires the
installation of a VPN software package. For certificate-based authentication clients, you can download the
installer package directly from the Azure portal. For RADIUS-based authentication clients, use the
New-AzureRmVpnClientConfiguration Azure PowerShell cmdlet to generate the package files. They will
include the Windows 64-bit and 32-bit installer packages, the mobileconfig file for MAC OS X devices,
and generic information that you can use to configure other VPN clients, such as strongSwan on Linux.

From the Azure infrastructure standpoint, a point-to-site VPN requires a VPN gateway associated with the
target Azure virtual network, just like a site-to-site VPN or ExpressRoute. However, in this case, there is no
need for additional on-premises servers or a private connection. You also need to take into account that
certificate-based authentication involves extra certificate management overhead. In particular, you need
to issue, install, and maintain the validity of client certificates. You should also keep track of the computers
to which you deployed client certificates, as well as their users. This allows you to revoke certificates in
case a computer gets compromised or stolen, or when a user leaves your organization.
When configuring a point-to-site VPN, you will need to designate an IP address range for VPN client
computers. As part of the VPN connection process, a VPN client automatically receives an IP address from
this range. At that point, the VPN client software automatically updates the local route table on the client
computer so that any connection targeting the IP address space of the Azure virtual network is routed via
the VPN connection.

Note: Updates to the local route tables on Windows VPN client computers require local
Administrator privileges.

The total bandwidth available for the point-to-site connections depends on the SKU of the VPN gateway:

• Basic. Up to 100 Mbps


• VpnGw1. Up to 500 Mbps

• VpnGw2. Up to 1 Gbps

• VpnGw3. Up to 1.25 Gbps

Note: The Basic SKU of the VPN gateway does not support RADIUS authentication and
IKEv2 VPN.

All point-to-site VPN clients share that bandwidth, so the user experience depends on the total number of
client computers simultaneously accessing the target virtual network. The VPN gateway enforces the limit
of 128 concurrent connections regardless of the SKU.
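As a rough illustration, the following Python sketch estimates the average bandwidth available per client by dividing the SKU's aggregate throughput by the number of concurrent connections. This is simple arithmetic based on the figures above, not a published formula; actual per-client throughput varies with workload and protocol overhead.

```python
# Approximate per-client bandwidth for point-to-site VPN, assuming the
# aggregate gateway throughput is shared evenly among connected clients.
# SKU throughput values (Mbps) are taken from the list above.
sku_mbps = {"Basic": 100, "VpnGw1": 500, "VpnGw2": 1000, "VpnGw3": 1250}

def per_client_mbps(sku, clients):
    # The gateway enforces a hard limit of 128 concurrent connections.
    clients = min(clients, 128)
    return sku_mbps[sku] / clients

print(per_client_mbps("VpnGw1", 50))   # 10.0 Mbps on average
print(per_client_mbps("VpnGw3", 128))  # ~9.77 Mbps at the connection limit
```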

Just like with a site-to-site VPN, the cost of a Point-to-Site (P2S) VPN comprises two main
components. The easiest-to-estimate part represents the hourly cost of virtual machines hosting the VPN
gateway. This depends on its SKU. In addition, there is a charge for outbound data transfers at standard
data transfer rates, which depend on the volume of data and the zone in which Azure datacenter hosting
the VPN gateway resides. There is no cost associated with inbound data transfers.

Site-to-site
Site-to-site VPNs rely either on static routes or BGP-based dynamic routing to direct traffic between on-
premises networks and Azure virtual networks. When using static routes, the Azure platform generates its
local route table when you create the site-to-site VPN connection based on two pieces of data: the IP
address space that you assigned to the Azure virtual network and the local network, which you define in
the process of setting up the VPN connection. The local network represents the IP address space of your
on-premises networks.

Note: Keep in mind that Azure implements the routing configuration of the Azure virtual
network. For cross-premises connectivity to function, you must also update the on-premises
routing configuration.

Additional Reading: The ability to use BGP in Site-to-Site VPN allows you to implement a
number of previously unsupported scenarios, such as transitive routing between your on-
premises locations and multiple Azure virtual networks as well as multiple tunnels between a
virtual network and an on-premises location with automatic failover between them. For more
information, refer to: “Overview of BGP with Azure VPN Gateways” at: https://aka.ms/rsmh1y

The site-to-site VPN method employs the IPSec protocol with a pre-shared key to provide authentication
between the on-premises VPN gateway and the Azure VPN gateway. The key is an alphanumeric string
between 1 and 128 characters.
From the infrastructure standpoint, in addition to a reliable connection to the internet from your on-
premises network, a site-to-site VPN requires a VPN gateway on each end of the VPN tunnel. On the
Azure side, you provision a VPN gateway as part of creating a site-to-site VPN. For more information
regarding VPN gateway characteristics, refer to the first lesson of this module.

Note: The effective throughput of VPN connections might vary, depending on the
bandwidth of the internet connection and impact of encryption associated with the VPN
functionality.

Details of on-premises site-to-site VPN configuration are device specific. Microsoft offers configuration
instructions for each of the validated VPN devices. Non-validated VPN devices may support site-to-site
VPN, but they require independent testing.

Additional Reading: For a list of VPN devices that Microsoft has validated in partnership
with their vendors, and their configuration instructions, refer to: “About VPN devices for Site-to-
Site VPN Gateway connections” at: http://aka.ms/Frtaeb

There are additional considerations regarding your on-premises infrastructure. In particular, if your VPN
gateway resides on the perimeter network behind a firewall, you must ensure that the following types of
traffic are allowed to pass through for both the inbound and outbound directions:

• IP protocol 50

• UDP port 500

• UDP port 4500



Just as with P2S VPN, two main components determine the cost of site-to-site VPNs. The easiest-to-
estimate part is the hourly cost of virtual machines hosting the VPN gateway. This depends on its SKU. In
addition, there is a charge for outbound data transfers at standard data transfer rates, which depend on
the volume of data and the zone in which the Azure datacenter that is hosting the VPN gateway resides.
There is also no cost associated with inbound data transfers.

Additional Reading: For up-to-date site-to-site VPN pricing information, refer to: “VPN
Gateway Pricing” at: http://aka.ms/Y57p7y

There is a 99.9 percent availability Service Level Agreement (SLA) for each VPN gateway. A number of
non-Microsoft vendors of VPN gateway devices support redundant configurations, which increase the
resiliency of the on-premises endpoint of the VPN tunnel.
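To put the 99.9 percent figure in perspective, the following Python snippet computes the corresponding maximum downtime per 30-day month. This is simple arithmetic, not part of the SLA text itself.

```python
# Maximum downtime permitted by a 99.9% monthly availability SLA,
# assuming a 30-day month.
minutes_per_month = 30 * 24 * 60          # 43,200 minutes
max_downtime = (1 - 0.999) * minutes_per_month
print(round(max_downtime, 1))             # 43.2 minutes per month
```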

ExpressRoute
ExpressRoute delivers private, network-layer connectivity between on-premises networks and Microsoft
Cloud, without crossing the internet, in the form of:

• Private peering. This includes connections to Azure virtual machines and Azure cloud services residing
on Azure virtual networks. You can establish connectivity to multiple Azure virtual networks, with up
to 10 virtual networks with the standard ExpressRoute offering and up to 100 virtual networks with
the ExpressRoute Premium add-on.

• Public peering. This includes connections to Azure services not accessible directly via Azure virtual
networks, such as Azure Storage or Azure SQL Database. With public peering, you can ensure that
traffic from on-premises locations to Azure public IP addresses does not cross the internet. It also
delivers predictable performance and latency when connecting to these IP addresses.
• Microsoft peering. This includes connections to the Office 365 and Dynamics 365 services. In addition,
Microsoft peering supports connectivity to Azure public services.

Additional Reading: You can change your current connectivity to Azure public services
from public peering to Microsoft peering. For details regarding this procedure, refer to: “Move a
public peering to Microsoft peering” at: https://aka.ms/Auj3mr. This simplifies and optimizes
routing configuration. However, note that, at the time of authoring this content, Microsoft
peering requires purchasing the ExpressRoute Premium add-on.

Each of these peering arrangements constitutes a separate routing domain, but all of them are
provisioned over the same physical connection. You have the option of combining them into the same
routing domain, although the recommendation is to implement private peering between the internal
network and Azure virtual networks, while limiting the scope of public peering and Microsoft peering to
on-premises perimeter networks.

Each peering arrangement allows you to connect to all Azure regions in the same geopolitical region as
the location of the ExpressRoute circuit. You can expand the scope of the connectivity globally by
provisioning the ExpressRoute Premium add-on.

Note: At the time of authoring this course, the only Azure services not supported by public
peering are:

• Content Delivery Network (CDN)

• Visual Studio Team Services load testing


• Microsoft Azure Multi-Factor Authentication

• Azure Traffic Manager

From the provisioning standpoint, besides implementing physical connections, you also need to create
one or more logical ExpressRoute circuits. You can identify each individual circuit based on its service key
(s-key), which takes the form of a globally unique identifier (GUID). A single circuit can support up to
three routing domains (private, public, and Microsoft, as listed above). Each circuit has a specific nominal
bandwidth associated with it, which can range between 50 Mbps and 10 Gbps, shared across the routing
domains. You have the option to increase or decrease the amount of provisioned bandwidth without the
need to re-provision the circuit.
In private peering scenarios, establishing a connection to a target virtual network requires creating a link
between the ExpressRoute circuit and the Azure ExpressRoute gateway attached to that virtual network.
As a result, the effective throughput on a per-virtual network basis depends on the SKU of the gateway:
• Standard. Up to 1,000 Mbps. It supports the coexistence of a site-to-site VPN and ExpressRoute.

• HighPerformance. Up to 2,000 Mbps. It supports the coexistence of a site-to-site VPN and


ExpressRoute.

• UltraPerformance. Up to 9,000 Mbps. It supports the coexistence of a site-to-site VPN and


ExpressRoute.

There are three ExpressRoute connectivity models:

• A co-location in a facility hosting an ExpressRoute exchange provider. This facilitates private routing
to Microsoft Cloud by using either Layer 2 or managed Layer 3 cross-connect with the exchange
provider.
• A Layer 2 or managed Layer 3 connection to an ExpressRoute point-to-point provider.

• An any-to-any (IPVPN) network, commonly implemented as a Multiprotocol Label Switching


(MPLS) cloud, with a wide area network (WAN) provider handling Layer 3 connectivity to the
Microsoft cloud.

Additional Reading: Because ExpressRoute depends on having access to provider services,


its availability depends on the customer location. For up-to-date information, refer to:
“ExpressRoute partners and peering locations” at: https://aka.ms/imjxgy

ExpressRoute routing is dynamic and relies on Border Gateway Protocol (BGP) route exchange between
the on-premises environment and the Microsoft Cloud. You can advertise up to 4,000 prefixes (up to
10,000 with the ExpressRoute Premium add-on) within the private peering routing domain and up to 200
in the case of public peering and Microsoft peering. The prefixes that you advertise via BGP comprise one
or more autonomous systems. Each autonomous system that relies on BGP route exchange has a
corresponding autonomous system number (ASN). There are two types of ASNs: public and private. A
public ASN is globally unique and supports exchanging routing information with any other autonomous
system on the internet. A private ASN is useful in scenarios that involve route exchange with a single
provider only, which eliminates the requirement of global uniqueness. ExpressRoute requires a public ASN
only for Microsoft peering.

To facilitate routing between your on-premises network and the Microsoft internet routers, you will need
to designate several ranges of IP addresses. Specifics of this configuration depend on the peering
arrangement to some extent, but:

• You must choose a pair of /30 subnets or a /29 subnet for each peering type.

• Each of the two /30 subnets will facilitate a separate BGP session. It is necessary to establish two
sessions to qualify for the ExpressRoute availability SLA.

• With private peering, you can use either private or public IP addresses. With public peering and
Microsoft peering, public IP addresses are mandatory.
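The relationship between these address blocks can be verified with Python's standard ipaddress module (illustration only; the 10.0.0.0/29 block is an arbitrary example): a /29 subnet subdivides into exactly two /30 subnets, one for each of the two BGP sessions.

```python
import ipaddress

# A /29 block splits into exactly two /30 subnets, each of which can
# carry one of the two BGP sessions required for the availability SLA.
block = ipaddress.ip_network("10.0.0.0/29")
subnets = list(block.subnets(new_prefix=30))
print(len(subnets))                # 2
print([str(s) for s in subnets])   # ['10.0.0.0/30', '10.0.0.4/30']
```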

Some providers manage routing of ExpressRoute traffic as part of their managed services. Usually,
however, when provisioning ExpressRoute via Layer 2 connectivity providers, routing configuration and
management is the customer’s responsibility.

Additional Reading: For more details, refer to: “ExpressRoute routing requirements” at:
http://aka.ms/Bsrgaw

The cost of ExpressRoute depends primarily on the billing model that you choose when provisioning the
service. At the time of authoring this course, there are three billing models:

• Unlimited data. This set monthly fee covers the service as well as an unlimited amount of data
transfers.

• Metered data. This set monthly fee covers the service. There is an additional charge for outbound
data transfers on a per GB basis. Prices depend on the zone where the Azure region resides.
• ExpressRoute Premium add-on. This service extension provides additional ExpressRoute capabilities
including:
o An increased number of routes that can be advertised in public and private peering scenarios, up
to a 10,000-route limit.

o Global connectivity to Microsoft Cloud from a circuit in a single Azure region.

o An increased number of virtual network links from an individual circuit, up to the 100-link limit.

o The ability to implement Microsoft peering.

When you evaluate the total cost of an ExpressRoute-based solution with private peering configuration,
you should also take into account the cost of ExpressRoute gateways that will provide connectivity to
individual virtual networks. As mentioned earlier, the cost of a gateway depends on its SKU.

From the resiliency standpoint, ExpressRoute circuits support a pair of connections between your network
edge devices and Microsoft edge routers via a redundant infrastructure maintained by a connectivity
provider. You must deploy redundant connections on your end of the circuit to qualify for the 99.9
percent circuit availability SLA. In private peering scenarios, each link to an individual virtual network is
subject to the 99.9 percent availability SLA applicable to the Azure ExpressRoute gateway.

Additional Reading: For more information on ExpressRoute, refer to: “ExpressRoute


technical overview” at: http://aka.ms/B9yy0v

VNet-to-VNet
As mentioned in the first lesson of this module, it is possible to connect virtual networks residing in two
different Azure regions by establishing a VPN tunnel, in a manner equivalent to setting up a site-to-site
VPN between an Azure virtual network and an on-premises location.

This involves provisioning a pair of Azure VPN gateways and establishing an IPSec tunnel with shared key
authentication between them. Depending on the specifics of the provisioning process, it is possible to
configure dynamic routing, with BGP route exchange between the two virtual networks. Note that the
connection between two Azure virtual networks is subject to charges associated with running a pair of
VPN gateways, and their throughput limits its bandwidth.

Additional Reading: Just as with Site-to-Site VPN, the ability to use BGP in VNet-to-VNet
connections allows you to implement several previously unsupported scenarios, such as transitive
routing between multiple Azure virtual networks. For more information, refer to: “Overview of
BGP with Azure VPN Gateways” at: https://aka.ms/rsmh1y

VNet Peering
You can connect two virtual networks residing in the same Azure region or in different Azure regions by
setting up VNet Peering between them. This implements direct connectivity without using VPN gateways.
As a result, you not only avoid the VPN gateway-related charges, but the resulting latency and bandwidth
also match the performance characteristics of connections within a single virtual network.

Note: The VNet Peering pricing model takes into account the volume of ingress and egress
data transfers at both ends of peered virtual networks.

Note: At the time of authoring this content, VNet Peering between Azure regions is in
public preview.

Besides performance benefits, VNet Peering offers some additional advantages by allowing routing of
traffic via virtual appliances and VPN gateways between the peered virtual networks. In particular, this
involves the following capabilities:
• Service chaining facilitates routing from one of two virtual networks via a virtual appliance located on
the other.
• Gateway transit facilitates routing from one Azure virtual network to your on-premises location via
another Azure virtual network configured with site-to-site VPN or ExpressRoute.

These capabilities allow you to minimize cost and management overhead of your Azure-resident virtual
networking components. Rather than having to provision a separate security appliance on each virtual
network and a dedicated virtual gateway to provide hybrid connectivity, you can create a single hub
virtual network providing these services for all other virtual networks functioning as spokes.

You can establish VNet Peering between the two virtual networks if you satisfy the following
requirements:

• Both virtual networks reside in the same region (unless you use cross-region VNet Peering, which is in preview).

• The virtual networks do not have overlapping IP address spaces.

• The virtual networks belong either to the same Azure subscription or to separate Azure subscriptions
that are associated with the same Azure Active Directory tenant.

• At least one of the two virtual networks is an Azure Resource Manager resource. It is not possible to
establish VNet Peering between two classic virtual networks.

• Your user account has at least read and write permissions on the Virtual Network peering resource
type scoped to the virtual networks that you want to connect via VNet Peering. The built-in Network
Contributor role-based access control (RBAC) role includes these permissions.

• You have not reached the limit on the number of VNet Peerings per virtual network. The default limit
is 10. You can increase it to 50 by contacting Azure support.

Note: Gateway transit requires that both virtual networks have been provisioned by using
the Azure Resource Manager deployment model.

VNet Peering is nontransitive. This means that if you establish VNet Peering between VNet1 and VNet2
and between VNet2 and VNet3, VNet Peering capabilities do not apply between VNet1 and VNet3.
However, you can leverage user-defined routes and service chaining to implement custom routing that
will provide transitivity. This allows you to implement multi-level hub and spoke architecture and
overcome the limit on the number of VNet Peerings per virtual network.

Additional Reading: For information regarding overcoming VNet Peering limits, refer to:
“Implement a hub-spoke network topology in Azure” at: https://aka.ms/cto8hx
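The hub and spoke pattern described above relies on user-defined routes in the spoke virtual networks. The following Azure PowerShell commands sketch how such a route might be defined; the resource names and the hub appliance IP address (10.0.1.4) are assumptions for illustration only, not part of the walkthroughs in this module:

```powershell
# Hypothetical sketch: route traffic bound for another spoke (192.168.2.0/24)
# through a firewall appliance in the hub virtual network.
$routeTable = New-AzureRmRouteTable -Name Spoke1RouteTable `
    -ResourceGroupName SpokeRG -Location centralus

# Next hop is the private IP of the hub's network virtual appliance.
Add-AzureRmRouteConfig -Name ToSpoke2 -RouteTable $routeTable `
    -AddressPrefix 192.168.2.0/24 -NextHopType VirtualAppliance `
    -NextHopIpAddress 10.0.1.4 | Set-AzureRmRouteTable
```

For the route to take effect, you would then associate the route table with the relevant spoke subnet by using Set-AzureRmVirtualNetworkSubnetConfig with the -RouteTable parameter.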

Direct connectivity between Classic and Azure Resource Manager resources


You cannot attach classic Azure VMs and cloud service web and worker role instances directly to a virtual
network that you used the Azure Resource Manager deployment model to create. Similarly, you cannot
attach Azure VMs that you used the Azure Resource Manager deployment model to create directly to a
classic virtual network. To allow for direct communication between classic and Azure Resource Manager
resources, you can create a VPNet Peering or VNet-to-VNet connection between a classic virtual network
and an Azure Resource Manager-based virtual network. The choice between the two options depends on
a number of factors, including:
• Estimated volume of data that you intend to transfer between the two virtual networks. The pricing
models for the two connectivity methods differ.

• Bandwidth and latency requirements. VNet Peering offers significant performance benefits.

• Shared Azure AD tenant, in cases where the two virtual networks belong to different subscriptions.
VNet Peering requires that both subscriptions share the same Azure AD tenant. This requirement
does not apply to VNet-to-VNet connectivity.

Configuring P2S VPN connectivity

Configuring a P2S VPN for certificate-based authentication
To set up a certificate-based P2S VPN, you must
configure an IP address space, configure a virtual
gateway, create certificates, and then install a
client VPN package. You can accomplish this by
using either the Azure portal or Azure PowerShell.
The following sample procedure describes how to
use Azure PowerShell commands to configure a
P2S VPN.

Configure P2S connection for Azure


1. Start Microsoft Azure PowerShell and sign in to your subscription:

Login-AzureRMAccount

2. If there are multiple subscriptions associated with your account, select the target subscription in
which you are going to create a virtual network, and configure a P2S VPN:

Set-AzureRmContext -SubscriptionId <Id of your subscription>

3. Create a new resource group:

New-AzureRmResourceGroup -Name AdatumRG -Location centralus

4. Create a new VNet (named AdatumVNet in this example) with an IP address space (192.168.0.0/16 in
this example; adjust it to match the IP address space of your virtual network):

New-AzureRmVirtualNetwork -ResourceGroupName AdatumRG -Name AdatumVNet -AddressPrefix 192.168.0.0/16 -Location centralus

5. Store a reference to the virtual network object in a variable:

$vnet = Get-AzureRmVirtualNetwork -ResourceGroupName AdatumRG -Name AdatumVNet

6. Add a front-end subnet to the new virtual network:

Add-AzureRmVirtualNetworkSubnetConfig -Name FrontEnd -VirtualNetwork $vnet -AddressPrefix 192.168.0.0/24

7. Add the gateway subnet to the new virtual network:

Add-AzureRmVirtualNetworkSubnetConfig -Name GatewaySubnet -VirtualNetwork $vnet -AddressPrefix 192.168.254.0/26

8. Create a variable referencing the gateway virtual network subnet for which you will request a public
IP address:

$gwSubnet = Get-AzureRmVirtualNetworkSubnetConfig -Name "GatewaySubnet" -VirtualNetwork $vnet

9. Request a dynamically assigned IP address:

$pip = New-AzureRmPublicIpAddress -Name AdatumPIP -ResourceGroupName AdatumRG -Location centralus -AllocationMethod Dynamic

10. Provide IP configuration that is required for the VPN gateway:

$ipconfig = New-AzureRmVirtualNetworkGatewayIpConfig -Name GWIPConfig -Subnet $gwSubnet -PublicIpAddress $pip

11. Update the configuration of the virtual network:

Set-AzureRmVirtualNetwork -VirtualNetwork $vnet

Generate root and client certificates


You need to use certificates to authenticate clients as they connect to the VPN and to encrypt the
connection. You have the option of generating self-signed certificates, although, in a production
environment, you will most likely rely on your public key infrastructure (PKI) instead. In this walkthrough,
we will use the first of these options.

Start by generating a self-signed root certificate, next upload it to the Azure portal, reference it when
creating a client certificate, and finally install the client certificate on the client computer. To complete
these tasks, use the following steps:

1. On Windows 10 computers, you can use the New-SelfSignedCertificate cmdlet to run the following
from an elevated Windows PowerShell console:

$cert = New-SelfSignedCertificate -Type Custom -KeySpec Signature `
-Subject "CN=AdatumRootCertificate" -KeyExportPolicy Exportable `
-HashAlgorithm sha256 -KeyLength 2048 `
-CertStoreLocation "Cert:\CurrentUser\My" -KeyUsageProperty Sign -KeyUsage CertSign

Additional Reading: In earlier versions of Windows, you can use the makecert tool
available as part of the Windows 10 Software Development Kit (SDK) by following the directions
available at: “Generate and export certificates for point-to-site connections using MakeCert” at
https://aka.ms/r9weu9

2. To export the public key of the newly generated certificate, run certmgr.msc. Navigate to the
Certificates – Current User\Personal\Certificates store, right-click the certificate, and then click All
Tasks followed by Export. This will start the Certificate Export Wizard.

3. Use the wizard to export the public key in the Base-64 encoded X.509 (.CER) format. Store it as
C:\cert\AdatumRootCertificate.cer.

4. Run the following sequence of cmdlets to convert the certificate to the proper format and store a
reference to it in a variable:

$certFilePath = "C:\cert\AdatumRootCertificate.cer"
$adatumRootCert = New-Object
System.Security.Cryptography.X509Certificates.X509Certificate2($certFilePath)
$adatumRootCertBase64 = [System.Convert]::ToBase64String($adatumRootCert.RawData)
$adatumVPNRootCert = New-AzureRmVpnClientRootCertificate `
-Name 'AdatumRootCertificate' -PublicCertData $adatumRootCertBase64

5. To generate the client certificate, run the following command from the existing elevated Windows
PowerShell console:

New-SelfSignedCertificate -Type Custom -KeySpec Signature `
-Subject "CN=AdatumClientCertificate" -KeyExportPolicy Exportable `
-HashAlgorithm sha256 -KeyLength 2048 `
-CertStoreLocation "Cert:\CurrentUser\My" `
-Signer $cert -TextExtension @("2.5.29.37={text}1.3.6.1.5.5.7.3.2")

Configure a virtual gateway


Point-to-site connections require a virtual gateway in the target virtual network. You also need to prepare
a pool of IP addresses that will be allocated to clients connecting via P2S VPN. The command below uses
the "172.16.0.0/24" IP address range for this purpose. To create the virtual gateway, type the following
command, and then press Enter:

New-AzureRmVirtualNetworkGateway -Name AdatumGateway -ResourceGroupName AdatumRG -Location centralus -IpConfigurations $ipconfig -GatewayType Vpn -VpnType RouteBased -EnableBgp $false -GatewaySku VpnGw1 -VpnClientAddressPool "172.16.0.0/24" -VpnClientRootCertificates $adatumVPNRootCert

Create and install the VPN client configuration package


To connect to the VPN, a user must install a VPN client configuration package on the client computer:
1. To retrieve the URL to download a VPN Client Configuration package, type the following command,
and then press Enter:

Get-AzureRmVpnClientPackage -ResourceGroupName AdatumRG -VirtualNetworkGatewayName AdatumGateway -ProcessorArchitecture Amd64

2. Copy the URL generated by the previous command, paste it into a browser, and then download and
install the package.

Connect to the VPN


Now that you have installed both the client certificate and the VPN client configuration package, you can
connect to the virtual network.

1. Navigate to the list of VPN connections and locate the VPN connection that you created as a result of
the installation of the VPN client configuration package. The name of the VPN connection will be the
same as the name of the virtual network in Azure.

2. Right-click the connection, and then click Connect.

3. Click Continue, and then click Connect.

Additional Reading: For information about setting up P2S VPN via the Azure portal,
refer to: “Configure a Point-to-Site connection to a VNet using the Azure portal” at:
https://aka.ms/c3vr2o

Configuring a P2S VPN for RADIUS-based authentication


To set up a RADIUS-based P2S VPN, perform the following high-level steps:

1. Create the target virtual network, including its gateway subnet that will host the VPN gateway.

2. Set up the RADIUS server that will handle authentication for VPN clients. Identify the shared secret
that you will assign to the VPN gateway to configure it as a RADIUS client.

3. Create a route-based VPN gateway.



4. Modify the gateway by adding the VPN client IP address pool and then configure it as a RADIUS
client by using the secret that you identified in the previous step.

5. Download the VPN client configuration package by running the


Get-AzureRmVpnClientConfiguration Azure PowerShell cmdlet.

6. Configure Windows, Mac OS X, and Linux VPN clients.

Additional Reading: At the time of authoring this content, this procedure requires the use
of Azure PowerShell. For details, refer to: “Configure a Point-to-Site connection to a VNet using
RADIUS authentication: PowerShell” at https://aka.ms/Uve2sr
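As a sketch of step 4 above, the following Azure PowerShell commands modify an existing gateway for RADIUS authentication. The gateway name, client address pool, and RADIUS server address and secret shown here are assumptions for illustration; substitute the values from your own deployment:

```powershell
# Hypothetical sketch: configure an existing VPN gateway as a RADIUS client.
$gw = Get-AzureRmVirtualNetworkGateway -Name AdatumGateway -ResourceGroupName AdatumRG

# The shared secret identified in step 2, as a secure string.
$secret = ConvertTo-SecureString -String 'RadiusSharedSecret' -AsPlainText -Force

# Add the client address pool and point the gateway at the RADIUS server.
Set-AzureRmVirtualNetworkGateway -VirtualNetworkGateway $gw `
    -VpnClientAddressPool '172.16.0.0/24' `
    -RadiusServerAddress '10.1.0.4' -RadiusServerSecret $secret
```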

Configuring a site-to-site VPN


You can use site-to-site VPN for cross-premises
connectivity between Azure virtual networks and
on-premises networks. You can configure a site-
to-site VPN by using the Azure portal, Azure
PowerShell, Azure CLI, or Azure Resource Manager
templates.

Configuring a site-to-site VPN


The following procedure describes a sample site-
to-site VPN setup that uses Azure PowerShell:

Connect to your Azure subscription


1. Start Microsoft Azure PowerShell and sign in
to your subscription:

Login-AzureRMAccount

2. If there are multiple subscriptions associated with your account, select the target subscription in
which you are going to create the virtual network, and then configure a site-to-site VPN:

Set-AzureRmContext -SubscriptionId <Id of your subscription>

Create a virtual network and gateway subnet


1. Create a new resource group:

New-AzureRmResourceGroup -Name AdatumRG -Location centralus

2. Create a new VNet (in this example, its name is AdatumVNet), assign an address space (in this
example, its value is 192.168.0.0/16), and store a reference to the new virtual network in the $vnet
variable:

$vnet = New-AzureRmVirtualNetwork -ResourceGroupName AdatumRG -Name AdatumVNet -AddressPrefix 192.168.0.0/16 -Location centralus

3. Add a front-end subnet to the new virtual network:

Add-AzureRmVirtualNetworkSubnetConfig -Name FrontEnd -VirtualNetwork $vnet -AddressPrefix 192.168.0.0/24

4. Add a gateway subnet to the new virtual network:

Add-AzureRmVirtualNetworkSubnetConfig -Name GatewaySubnet -VirtualNetwork $vnet -AddressPrefix 192.168.254.0/26

5. Update the configuration of the virtual network:

Set-AzureRmVirtualNetwork -VirtualNetwork $vnet

Add a local site


• Specify the properties of the on-premises network and store them in the variable $gwlocal. You must
provide the following values:

o Name. Provide a name for the on-premises network.

o GatewayIpAddress. Specify the external IP address of your on-premises VPN device.

o AddressPrefix. Specify the IP address space of your on-premises network.

$gwlocal = New-AzureRmLocalNetworkGateway -Name LocalSite -ResourceGroupName AdatumRG -Location centralus -GatewayIpAddress '15.21.115.234' -AddressPrefix '10.0.0.0/24'

Request a public IP address for the Azure VPN gateway, and create the IP configuration
1. Request a dynamically assigned IP address:

$gwpip = New-AzureRmPublicIpAddress -Name AdatumGWPIP -ResourceGroupName AdatumRG -Location centralus -AllocationMethod Dynamic

2. Create a variable referencing the gateway subnet of the VNet:

$gwsubnet = Get-AzureRmVirtualNetworkSubnetConfig -Name "GatewaySubnet" -VirtualNetwork $vnet

3. Create the IP configuration required for the VPN gateway:

$gwipconfig = New-AzureRmVirtualNetworkGatewayIpConfig -Name GWIPConfig -SubnetId $gwsubnet.Id -PublicIpAddressId $gwpip.Id

Create a virtual gateway


• Create a virtual gateway that will be used for the site-to-site VPN connection, and then store a
reference to it in the variable $gwremote. Include the following parameters:

o GatewayType: Specify the gateway type to be VPN.

o VpnType: Specify RouteBased VPN type or PolicyBased VPN type. The type must match your on-
premises VPN device.

$gwremote = New-AzureRmVirtualNetworkGateway -Name AdatumGateway -ResourceGroupName AdatumRG -Location centralus -IpConfigurations $gwipconfig -GatewayType Vpn -VpnType RouteBased -GatewaySku VpnGw1

Configure a VPN device


• A site-to-site VPN requires an on-premises VPN device, which routes traffic from the on-premises
network to the virtual network and receives traffic from the virtual gateway.

Additional Reading: As mentioned earlier, for a list of VPN devices that Microsoft has
validated in partnership with their vendors, and their configuration instructions, refer to: “About
VPN devices for Site-to-Site VPN Gateway connections” at: http://aka.ms/Frtaeb

Create a VPN connection


• Create the VPN connection (named localtoazure in this example) between the on-premises VPN gateway
and the virtual network gateway that you created in your Azure virtual network. You must configure
your on-premises VPN device with the same shared key:

New-AzureRmVirtualNetworkGatewayConnection -Name localtoazure -ResourceGroupName AdatumRG -Location centralus -VirtualNetworkGateway1 $gwremote -LocalNetworkGateway2 $gwlocal -ConnectionType IPsec -RoutingWeight 10 -SharedKey 'abc123'

Verify the VPN connection


• Use the following command to verify the VPN connection:

Get-AzureRmVirtualNetworkGatewayConnection -Name localtoazure -ResourceGroupName AdatumRG -Debug

Additional Reading: For information about setting up a site-to-site VPN via Azure CLI, refer
to: “Create a virtual network with a Site-to-Site VPN connection using CLI” at:
https://aka.ms/wym5yd

Configuring a VNet-to-VNet VPN


You can use a VNet-to-VNet VPN to connect
virtual networks in two different Azure regions.
The virtual networks can be in the same Azure
subscription or in different subscriptions. You can
configure a VNet-to-VNet VPN by using the Azure
portal, Azure PowerShell, Azure CLI, or Azure
Resource Manager templates.

Configuring a VNet-to-VNet VPN connection is similar to a site-to-site VPN connection, with one
difference: the other side of the connection is not an on-premises network, but another Azure virtual
network. The following procedure outlines the high-level steps for creating a VNet-to-VNet VPN
connection:

1. Connect to your Azure subscription.

2. Create the first virtual network.


3. Request a public IP address, and create the gateway configuration.

4. Create the gateway.



5. Create the second virtual network and its gateway.

6. Connect the gateways.

Creating a VNet-to-VNet VPN connection


The following procedure lists the steps to use Azure PowerShell to create a sample VNet-to-VNet VPN
connection.

Connect to your subscription from Azure PowerShell


1. Start Microsoft Azure PowerShell and sign in to your Azure subscription:

Login-AzureRmAccount

2. If there are multiple subscriptions associated with your account, select the target subscription in
which you are going to create the virtual network, and then configure a VNet-to-VNet VPN:

Set-AzureRmContext -SubscriptionId <Id of your subscription>

Create a virtual network and gateway subnet


1. Create a new resource group:

New-AzureRmResourceGroup -Name AdatumRG -Location centralus

2. Create a new VNet (in this example, its name is AdatumVNet), assign an address space (in this
example, its value is 192.168.0.0/16), and store a reference to the new virtual network in the $vnet
variable:

$vnet = New-AzureRmVirtualNetwork -ResourceGroupName AdatumRG -Name AdatumVNet -AddressPrefix 192.168.0.0/16 -Location centralus

3. Add a front-end subnet to the new virtual network:

Add-AzureRmVirtualNetworkSubnetConfig -Name FrontEnd -VirtualNetwork $vnet -AddressPrefix 192.168.0.0/24

4. Add a gateway subnet to the new virtual network:

Add-AzureRmVirtualNetworkSubnetConfig -Name GatewaySubnet -VirtualNetwork $vnet -AddressPrefix 192.168.254.0/26

5. Update the configuration of the virtual network:

Set-AzureRmVirtualNetwork -VirtualNetwork $vnet

Request a public IP address for the Azure VPN gateway, and create the IP configuration
1. Request a dynamically assigned IP address:

$gwpip1 = New-AzureRmPublicIpAddress -Name AdatumGWPIP -ResourceGroupName AdatumRG -Location centralus -AllocationMethod Dynamic

2. Create a variable referencing the gateway subnet of the virtual network:

$gwsubnet1 = Get-AzureRmVirtualNetworkSubnetConfig -Name "GatewaySubnet" -VirtualNetwork $vnet

3. Provide the IP configuration required for the VPN gateway:

$gwipconfig1 = New-AzureRmVirtualNetworkGatewayIpConfig -Name GWIPConfig -SubnetId $gwsubnet1.Id -PublicIpAddressId $gwpip1.Id

Create a virtual gateway


• Create a virtual gateway that will be used for the VNet-to-VNet VPN connection and store a reference
to it in the variable $vnetgw1. You need to specify:

o GatewayType. Define the gateway type as VPN.

o VpnType. Configure RouteBased VPN type.

$vnetgw1 = New-AzureRmVirtualNetworkGateway -Name AdatumGateway -ResourceGroupName AdatumRG -Location centralus -IpConfigurations $gwipconfig1 -GatewayType Vpn -VpnType RouteBased -GatewaySku VpnGw1

Create a second virtual network


• Follow the same procedure as described above, to create a second virtual network and its VPN
gateway (which we will refer to here as $vnetgw2).

Connect the VPN gateways


• Create connections to enable communications from both networks, by using the same shared key:

New-AzureRmVirtualNetworkGatewayConnection -Name conn1 -ResourceGroupName AdatumRG -VirtualNetworkGateway1 $vnetgw1 -VirtualNetworkGateway2 $vnetgw2 -Location centralus -ConnectionType Vnet2Vnet -SharedKey 'abc123'
New-AzureRmVirtualNetworkGatewayConnection -Name conn2 -ResourceGroupName AdatumRG -VirtualNetworkGateway1 $vnetgw2 -VirtualNetworkGateway2 $vnetgw1 -Location westus -ConnectionType Vnet2Vnet -SharedKey 'abc123'

Additional Reading: For information about setting up site-to-site VPN via Azure CLI, refer
to: “Create a virtual network with a Site-to-Site VPN connection using CLI“ at:
https://aka.ms/wym5yd

Configuring VNet Peering


You can use VNet Peering to connect virtual
networks in the same Azure region or different
Azure regions. The virtual networks can be in the
same Azure subscription or in different
subscriptions, as long as they share the same
Azure AD tenant. VNet Peering also allows you to
connect two virtual networks created by using
different deployment models.

You can configure VNet Peering by using the Azure portal, Azure PowerShell, Azure CLI, or Azure
Resource Manager templates. The following procedure outlines the high-level steps for creating a VNet
Peering connection between two Azure Resource Manager–based virtual networks in the same Azure
subscription:

1. Connect to your Azure subscription.

2. Create the first virtual network.

3. Create the second virtual network.

4. Configure VNet Peering in the first virtual network.

5. Configure VNet Peering with matching settings in the second virtual network.

Creating a VNet Peering connection by using Azure PowerShell


The following procedure outlines the steps to use Azure PowerShell to create a sample VNet Peering
connection.

Connect to your Azure subscription


1. Start Microsoft Azure PowerShell and sign in to your Azure subscription:

Login-AzureRmAccount

2. If there are multiple subscriptions associated with your account, select the target subscription in
which you are going to create the virtual network, and then configure VNet Peering:

Set-AzureRmContext -SubscriptionId <Id of your subscription>

Create the first virtual network


1. Create a new resource group:

New-AzureRmResourceGroup -Name AdatumRG1 -Location centralus

2. Create a new VNet (in this example, its name is AdatumVNet1), assign an address space (in this
example, its value is 192.168.0.0/24), and store a reference to the new virtual network in the $vnet1
variable:

$vnet1 = New-AzureRmVirtualNetwork -ResourceGroupName AdatumRG1 -Name AdatumVNet1 -AddressPrefix 192.168.0.0/24 -Location centralus

3. Add a front-end subnet to the new virtual network:

Add-AzureRmVirtualNetworkSubnetConfig -Name FrontEnd -VirtualNetwork $vnet1 -AddressPrefix 192.168.0.0/26

4. Update the configuration of the virtual network:

Set-AzureRmVirtualNetwork -VirtualNetwork $vnet1



Create the second virtual network


1. Create a new resource group:

New-AzureRmResourceGroup -Name AdatumRG2 -Location centralus

2. Create a new VNet (in this example, its name is AdatumVNet2), assign an address space (in this
example, its value is 192.168.1.0/24), and store a reference to the new virtual network in the $vnet2
variable:

$vnet2 = New-AzureRmVirtualNetwork -ResourceGroupName AdatumRG2 -Name AdatumVNet2 -AddressPrefix 192.168.1.0/24 -Location centralus

3. Add a front-end subnet to the new virtual network:

Add-AzureRmVirtualNetworkSubnetConfig -Name FrontEnd -VirtualNetwork $vnet2 -AddressPrefix 192.168.1.0/26

4. Update the configuration of the virtual network:

Set-AzureRmVirtualNetwork -VirtualNetwork $vnet2

Configure VNet Peering in the first virtual network


• Create VNet peering from the first virtual network to the second network:

Add-AzureRmVirtualNetworkPeering `
-Name 'AdatumVnet1ToVnet2' `
-VirtualNetwork $vnet1 `
-RemoteVirtualNetworkId $vnet2.Id

Configure VNet Peering in the second virtual network


• Create VNet peering from the second virtual network to the first network:

Add-AzureRmVirtualNetworkPeering `
-Name 'AdatumVnet2ToVnet1' `
-VirtualNetwork $vnet2 `
-RemoteVirtualNetworkId $vnet1.Id

Verify the VNet Peering status


• To verify the status of the peering operation, run:

Get-AzureRmVirtualNetworkPeering `
-ResourceGroupName AdatumRG1 `
-VirtualNetworkName AdatumVnet1

followed by:

Get-AzureRmVirtualNetworkPeering `
-ResourceGroupName AdatumRG2 `
-VirtualNetworkName AdatumVnet2

The output of both commands should include Connected as the value of the PeeringState property.

Creating a VNet Peering connection by using the Azure portal


When creating a VNet Peering connection between two virtual networks in the Azure portal, you can
configure the following settings:

• Allow virtual network access. You must enable this setting on both virtual networks that participate in
the peering arrangement for traffic to flow between them.

• Allow forwarded traffic. You can enable this setting to route traffic to a network peered with either of
the two networks you are peering. This is common in hub and spoke configurations, where spokes
must be able to communicate via the hub. You should enable this setting on the spokes and the hub
to ensure routing between spokes. This is also necessary if you implement service chaining.

• Allow gateway transit. You must enable this setting to allow traffic originating from a peering partner
to flow via a VPN or ExpressRoute gateway on the virtual network that hosts the gateway. The traffic
can flow either to the on-premises environment or to another virtual network peered with this
network.
• Use remote gateways. This setting complements the Allow gateway transit setting. You must enable
it on the peering partner that wants to route its traffic via an ExpressRoute or VPN gateway that is
residing on the virtual network where you configured the Allow gateway transit setting.
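The same four settings are also exposed as parameters of the Add-AzureRmVirtualNetworkPeering cmdlet. The following sketch shows a possible hub and spoke configuration; the variable names ($hubVnet, $spokeVnet) are assumptions and refer to virtual network objects retrieved in the manner shown earlier in this topic:

```powershell
# Hypothetical sketch: on the hub side, allow forwarded traffic and offer the
# hub's VPN/ExpressRoute gateway to the peer (Allow gateway transit).
Add-AzureRmVirtualNetworkPeering -Name HubToSpoke -VirtualNetwork $hubVnet `
    -RemoteVirtualNetworkId $spokeVnet.Id -AllowForwardedTraffic -AllowGatewayTransit

# On the spoke side, route on-premises traffic via the hub's gateway
# (Use remote gateways). Virtual network access is allowed by default.
Add-AzureRmVirtualNetworkPeering -Name SpokeToHub -VirtualNetwork $spokeVnet `
    -RemoteVirtualNetworkId $hubVnet.Id -AllowForwardedTraffic -UseRemoteGateways
```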

Additional Reading: For information about setting up VNet Peering via the Azure portal
and Azure CLI, refer to: “Create a virtual network peering - Resource Manager, same subscription“
at: https://aka.ms/e9b8g6

Check Your Knowledge


Question

What is the throughput of High Performance SKU of ExpressRoute gateway?

Select the correct answer.

200 Mbps

500 Mbps

1000 Mbps

2000 Mbps

9000 Mbps

Lab B: Configuring VNet Peering


Scenario
Now that Adatum Corporation has deployed Azure Resource Manager VNets, the company wants to be
able to provide direct connectivity between them. Your plan is to implement VNet Peering to provide the
optimal performance with minimum cost.

Objectives
After completing this lab, you should be able to:

• Connect Azure virtual networks using VNet Peering.

• Configure VNet Peering-based service chaining.

• Validate virtual network connectivity using Azure-based and VM-based tools.

Note: The lab steps for this course change frequently due to updates to Microsoft Azure.
Microsoft Learning updates the lab steps frequently, so they are not available in this manual. Your
instructor will provide you with the lab documentation.

Lab Setup
Estimated Time: 35 minutes

Virtual machine: 20533E-MIA-CL1

User name: Student


Password: Pa55w.rd

Before you begin this lab, ensure that you have completed the first lab in this module: Creating virtual
networks.

Question: What do you consider to be the most important advantages of VNet Peering?

Module Review and Takeaways


Best Practices
• Use Azure Resource Manager templates to simplify virtual network provisioning.

• Consider optimizing network throughput, especially on Azure VM sizes that include Accelerated
Networking features, by enabling Receive Side Scaling (RSS). For details, refer to: “Optimize network
throughput for Azure virtual machines” at: https://aka.ms/udj3xg

Common Issues and Troubleshooting Tips


Common issue: Site-to-site VPN tunnel failed, or wrong virtual network configuration.

Troubleshooting tips:

• Use the Start-AzureVNetGatewayDiagnostics cmdlet to begin the capture process and analyze the
logs.

• Use the Test-NetConnection command to try sending traffic across the tunnel from each side.

• Review the current Azure Resource Manager template or network configuration file for classic virtual
networks and correct any problems.

• Note that the tracert tool is not supported for troubleshooting end-to-end connectivity in Azure
virtual networks.

Common issue: Misconfigured user-defined routes in an Azure virtual network.

Troubleshooting tip: Refer to: “Troubleshoot routes using the Azure Portal” at: https://aka.ms/hrgk7r and
“Troubleshoot routes using Azure PowerShell” at: https://aka.ms/omhqqx

Common issue: Misconfigured NSGs in an Azure virtual network.

Troubleshooting tip: Refer to: “Troubleshoot Network Security Groups using the Azure Portal” at:
https://aka.ms/tuv6q3 and “Troubleshoot Network Security Groups using Azure PowerShell” at:
https://aka.ms/x74e26

Review Question

Question: What are the considerations for choosing a name resolution solution for an Azure
virtual network–based deployment?

Module 3
Implementing Microsoft Azure Virtual Machines and
virtual machine scale sets
Contents:
Module Overview 3-1

Lesson 1: Overview of Virtual Machines and virtual machine scale sets 3-2

Lesson 2: Planning deployment of Virtual Machines


and virtual machine scale sets 3-5

Lesson 3: Deploying Virtual Machines and virtual machine scale sets 3-19

Lab: Deploying Virtual Machines 3-39

Module Review and Takeaways 3-40

Module Overview
Virtual machines (VMs) are the most flexible resources available for deploying your workloads in Microsoft
Azure. You can use Virtual Machines to host custom services and applications, implement infrastructure
roles, or serve as a target for lift-and-shift migrations to the cloud. If your workloads are stateless and
require autoscaling capabilities, you can deploy them into virtual machine scale sets.

This module introduces the core capabilities of Virtual Machines and virtual machine scale sets and
presents different ways in which you can provision them.

Objectives
After completing this module, you will be able to:

• Describe the main characteristics of Virtual Machines and virtual machine scale sets.

• Plan for deployment of Virtual Machines and virtual machine scale sets.

• Deploy Virtual Machines and virtual machine scale sets.



Lesson 1
Overview of Virtual Machines and virtual machine
scale sets
VMs provide many advantages over physical computers. You can deploy VMs on physical servers in your
on-premises environment, or you can deploy virtual machines in Azure. In this lesson, you will learn about
the differences between these virtualization environments. You will also learn about features that are
unique to Virtual Machines.

Lesson Objectives
After completing this lesson, you will be able to:

• Prepare the lab environment.

• Describe Virtual Machines.


• Describe virtual machine scale sets.

Demonstration: Preparing the lab environment


Perform the tasks in this demonstration to prepare the lab environment. The environment will be
configured while you progress through this module, learning about the Azure services that you will use in
the lab.

Important: The scripts used in this course might delete objects in your subscriptions.
Therefore, you should complete this course by using a new Azure subscription. You should also
use a new Microsoft account that is not associated with any other Azure subscription. This will
eliminate the possibility of any potential confusion when running setup scripts.

This course relies on custom Azure PowerShell modules, including Add-20533EEnvironment to prepare
the lab environment for labs, and Remove-20533EEnvironment to perform clean-up tasks at the end of
the module.

What are Virtual Machines?


Virtual Machines are an Infrastructure as a Service
(IaaS) compute service offering available in Azure.
When compared with other compute services,
Virtual Machines provide the greatest degree of
control over the configuration of the virtual
machine and its operating system. You can
configure the operating system running within a
VM by using Virtual Machine Extensions (VM
Extensions), including methods such as custom
Windows PowerShell scripts, Desired State
Configuration (DSC), Chef, or Puppet.
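For example, the Custom Script Extension can run a Windows PowerShell script inside a VM after deployment. The following Azure PowerShell sketch illustrates the approach; the resource group, VM name, extension name, and script URL are placeholders rather than values from the lab environment:

Set-AzureRmVMCustomScriptExtension -ResourceGroupName 'ExampleRG' `
    -VMName 'ExampleVM' -Location 'East US' -Name 'ConfigScript' `
    -FileUri 'https://example.com/scripts/configure.ps1' `
    -Run 'configure.ps1'

The extension downloads the referenced script into the VM and executes it with local administrative privileges.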

Virtual Machines share most of their characteristics with the Microsoft Hyper-V VMs you deploy in your
on-premises datacenter. However, several important differences exist between them, such as:

• Virtual Machines that you can provision are available in specific sizes. You cannot specify arbitrary
processing, memory, or storage parameters when deploying a virtual machine. Instead, you must
select one of the predefined choices. At the time of writing this course, Microsoft offers virtual
machines in two tiers: Basic and Standard. The Basic tier, intended for development and test
workloads, includes five virtual machine sizes, ranging from 1 core with 0.75 gigabytes (GB) of RAM to
8 cores with 14 GB of RAM. The Standard tier has several series, including A, Av2, B, D, Dv2, Dv3, DS,
DSv2, DSv3, Ev3, ESv3, F, Fs, Fsv2, G, GS, H, Ls, M, NV, NC, NCv2, NCv3, and ND for a total of more
than 90 virtual machine sizes. The largest of them feature 128 virtual central processing units (vCPUs),
3800 GB of RAM, and up to 64 disks.

• There is a 2-terabyte (TB) size limit on a virtual disk hosting a Virtual Machine’s operating system and
a 4-TB size limit on any additional virtual disk that you attach to a Virtual Machine. Note that this
does not imply a limit on the size of data volumes. You can create multiple-disk volumes by using
Storage Spaces in Windows Server or volume managers, such as Logical Volume Manager (LVM) in
Linux. Because the largest Virtual Machine size supports up to 64 data disks, you can create volumes
of up to 256 TB by using this approach. The maximum volume size depends on the size of the virtual
machine, which determines the maximum number of disks you can attach to that virtual machine.

• A limit also exists on the throughput and input/output operations per second (IOPS) that individual
disks support. With Standard storage, you should expect about 60 megabytes per second (MBps) or
500 8-kilobyte (KB) IOPS. With Azure Premium storage, performance depends on the disk size, with
4-TB disks supporting up to 250 MBps and 7,500 256-KB IOPS. If you need to increase per-volume
performance beyond these limits, you can increase the throughput and IOPS by creating multiple-
disk volumes.

• At the time of writing this course, any virtual disks that you intend to attach to Virtual Machines must
be in the .vhd format. There is also no support for Generation 2 Hyper-V virtual machines in Azure.
Additionally, no support exists for dynamically expanding or differencing virtual disks—they all must
be fixed.
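If you plan to upload an on-premises virtual disk, you can convert it to the required fixed-size .vhd format by using the Hyper-V Convert-VHD cmdlet and then upload it with Add-AzureRmVhd. In the following sketch, the file paths, resource group, and destination URL are illustrative only:

Convert-VHD -Path 'D:\VHDs\AppServer.vhdx' -DestinationPath 'D:\VHDs\AppServer.vhd' -VHDType Fixed
Add-AzureRmVhd -ResourceGroupName 'ExampleRG' -LocalFilePath 'D:\VHDs\AppServer.vhd' `
    -Destination 'https://examplestorage.blob.core.windows.net/vhds/AppServer.vhd'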
Virtual Machines, like other cloud-based services, are inherently more agile than on-premises virtual
machines. You can provision and scale them on an as-needed basis, without investing in dedicated
hardware. This makes Virtual Machines the most suitable solution in scenarios that must accommodate
dynamically changing workloads. In addition, the ease of provisioning and deprovisioning makes Virtual
Machines ideal for proof-of-concept or development scenarios, where the need for compute resources is
temporary.

In each of these scenarios, you also benefit from the pricing model applicable to Virtual Machines. When
you run Virtual Machines, you pay for the compute time on a per-second basis. The price for VMs is
calculated based on their size, the operating system, and any licensed software installed on the VM. A
running virtual machine requires allocation of Azure compute resources. Therefore, to avoid the
corresponding charges whenever you are not using it, you should change its state to Stopped
(Deallocated).
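For example, the following Azure PowerShell command stops and deallocates a VM, releasing its compute resources and suspending the corresponding compute charges (the resource group and VM names are placeholders):

Stop-AzureRmVM -ResourceGroupName 'ExampleRG' -Name 'ExampleVM' -Force

Without the -StayProvisioned parameter, Stop-AzureRmVM deallocates the VM rather than merely shutting down its operating system.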

What are virtual machine scale sets?


A virtual machine scale set is an Azure compute
resource consisting of up to 1000 identically
configured VMs that you deploy from the same
VM image and manage as a single entity. Their
primary purpose is to host stateless workloads
that require support for manual or automatic
horizontal scaling.

All VMs in a virtual machine scale set reside on a


single subnet of a virtual network. By default, their
deployment is highly available, since each set of
100 VMs is automatically part of the same
availability set. In addition, the platform attempts
to minimize deployment time and mitigate any potential provisioning failures. By default, it creates more
VMs than you initially specified and deletes the extra VMs after the deployment completes. You can
disable this behavior by turning off the overprovisioning VM scale setting when using deployment
templates.

You can deploy VMs in a scale set based on either an Azure Marketplace image or a custom image. The
maximum number of VMs in a scale set depends on the image type and on whether you configure the
VMs with managed or unmanaged disks. With managed disks, choosing a custom image limits the
maximum number of VMs in a scale set to 300. With unmanaged disks, the maximum number of VMs in a
scale set based on a Marketplace image is 100. To ensure acceptable storage I/O performance, this
number should not exceed 20 when using a custom VM image with overprovisioning enabled. If you
disable overprovisioning, this number increases to 40.

Note: You will learn about availability sets and managed disks in the next lesson of this
module.

Virtual machine scale sets integrate with Azure Basic Load Balancer, Azure Standard Load Balancer, and
Azure Application Gateway. Azure Basic Load Balancer supports scale sets of up to 100 VMs. To provide
load balancing for more VMs, you can use Azure Standard Load Balancer for layer 4 load balancing or
Azure Application Gateway for layer 7 load balancing.

Note: Deployment of a virtual machine scale set from the Azure portal will, by default,
provision an Azure Standard Load Balancer.

Note: To implement a virtual machine scale set with more than 100 VMs, you must set its
singlePlacementGroup property to False. This option is configurable when deploying virtual
machine scale sets from the Azure portal, or via Azure PowerShell, Azure CLI, or Azure Resource
Manager deployment templates.
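When using Azure PowerShell, you might start building such a scale set configuration as follows. This is a sketch only; it assumes a version of the AzureRM module that exposes the -SinglePlacementGroup parameter, and the location, capacity, and SKU values are illustrative:

$vmssConfig = New-AzureRmVmssConfig -Location 'East US' -SkuCapacity 200 `
    -SkuName 'Standard_DS2_v2' -UpgradePolicyMode 'Automatic' -SinglePlacementGroup $false

You would then add network, storage, and operating system profiles to this configuration object before passing it to New-AzureRmVmss.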

Question: What are the primary differences between on-premises Hyper-V VMs and Virtual
Machines?

Lesson 2
Planning deployment of Virtual Machines and virtual
machine scale sets
There are multiple reasons for deploying Virtual Machines and virtual machine scale sets. For example,
you might be implementing a new cloud-based application or moving an existing on-premises workload
to Azure. This lesson introduces you to the key considerations for planning the deployment of Virtual
Machines and virtual machine scale sets. It also provides information to help you identify workloads that
are suitable for each of them.

Lesson Objectives
After completing this lesson, you will be able to:

• Identify the workloads for Virtual Machines and virtual machine scale sets.
• Describe the considerations for Virtual Machine sizing.

• Describe the availability and scalability considerations for Virtual Machines and virtual machine
scale sets.
• Describe the storage options for Virtual Machines and virtual machine scale sets.

• Explain the primary benefits of managed disks.

• Create an availability set for Virtual Machines.

Identifying workloads for Virtual Machines and virtual machine scale sets

Virtual Machine workloads


The following types of workloads can benefit
significantly from the scalability, resiliency, and
agility of Virtual Machines:

• Periodic workloads, such as:

o A complex data analysis of sales figures


that an organization needs to run at the
end of each month.

o Seasonal marketing campaigns on an


organization’s website.

o Annual retail sales spurts that might occur during festive holidays.
• Unpredictable-growth workloads, such as those resulting from an organization’s rapid expansion or
from short-term increases in sales of fad products.

• Spiking workloads, such as those experienced by websites that provide news services or by branch
offices that perform end-of-day reporting for a main office.
• Workloads that require high availability and multiregion resiliency, such as commercial online stores.

• Steady workload scenarios where the operational costs in Azure are lower than the combination of
on-premises capital expenditures and maintenance overhead.

On the other hand, some workloads might not be suitable for Virtual Machines. One example is low-
volume or limited-growth workloads that can run on premises on commodity hardware. Another is a
highly regulated environment—for example, where a local government restricts the type of data that
organizations or companies can host in the cloud. Yet another might involve workloads with licenses
associated with underlying hardware. In such situations, you might consider implementing a hybrid
solution, with workloads split between Azure and an on-premises environment.

Virtual machine scale set workloads


The primary advantage of virtual machine scale sets over Virtual Machines is support for large-scale, rapid
deployments and automatic horizontal scaling. Since scaling in involves deprovisioning of VMs, the scale
set workloads should be stateless. Some common scenarios in which VM scale sets deliver significant
benefits include big compute, big data, and containerized workloads. However, VM scale sets are suitable
whenever you need to simplify management of multiple stateless VMs, especially if you also require
autoscaling capabilities.

Operating system support for Virtual Machines and virtual machine scale sets
Marketplace includes images of all the currently supported versions of the Windows Server operating
system. A custom support agreement (CSA) is necessary to obtain support for operating systems that have
reached the end date of their extended support. Virtual Machines also support a variety of Linux
distributions, including CentOS, CoreOS, Debian, Oracle Linux, Red Hat, SUSE, openSUSE, and Ubuntu.
Some Azure subscription types, such as Microsoft Developer Network (MSDN) subscriptions, also provide
access to Windows client operating system images.

Software support for Virtual Machines


Virtual Machines support a wide range of Microsoft server software, including:

• Microsoft BizTalk Server 2013 and newer


• Microsoft Dynamics AX 2012 R3 and newer

• Microsoft Dynamics CRM 2013 and newer

• Microsoft Dynamics GP 2013 and newer


• Microsoft Exchange 2013 and newer

• Microsoft Forefront Identity Manager (FIM) 2010 R2 with Service Pack 1 (SP1) and newer

• Microsoft Identity Manager (MIM)

• Microsoft HPC Pack 2012 and newer

• Microsoft Project Server 2013 and newer

• Microsoft SharePoint Server 2010 and newer


• Microsoft SQL Server 2008 (64-bit) and newer

• Microsoft System Center 2012 SP1 and newer (with the exception of System Center Virtual Machine
Manager)

• Microsoft Team Foundation Server 2012 and newer



The following lists show the Windows Server roles that Virtual Machines do and do not support.

Supported Windows Server roles:

• Active Directory Domain Services (AD DS)

• Active Directory Federation Services

• Active Directory Lightweight Directory Services

• Application Server

• DNS Server

• Failover Clustering

• File Services

• Hyper-V (VM series–dependent)

• Network Policy and Access Services

• Print and Document Services

• Remote Access (Web Application Proxy)

• Remote Desktop Services

• Web Server (Internet Information Services)

• Windows Server Update Services

Unsupported server roles:

• Dynamic Host Configuration Protocol (DHCP) Server

• Remote Access (Direct Access)

• Rights Management Services (RMS)

• Windows Deployment Services (Windows DS)

Note: You can install the Hyper-V role on the Dv3 and Ev3 series of Virtual Machines, which
support nested virtualization.

Virtual Machines also do not support several Windows Server features:


• Microsoft iSNS Server

• Multipath I/O (MPIO)

• Network Load Balancing (NLB)

• Peer Name Resolution Protocol (PNRP)

• Simple Network Management Protocol (SNMP) service

• Storage Manager for storage area networks (SANs)


• Windows Internet Name Service (WINS)

• Wireless local area network (LAN) service



Virtual Machines and virtual machine scale set sizing


Deploying virtual machines in Azure differs from
deploying them in an on-premises Hyper-V
environment. When you manage the hypervisor
platform, you can configure all VM settings any
way you like. In Azure, you select from a range of
predefined configuration options that correspond
to different VM sizes. The VM size determines
characteristics such as the number and speed of
its processors, amount of memory, maximum
number of network adapters or data disks you can
attach to it, and maximum size of a temporary
disk.

Note: A Virtual Machine’s temporary disk resides on the Hyper-V host where the VM runs.
The operating system and data disks of a Virtual Machine reside in Azure Storage.

Note: Considering the number and variety of Virtual Machine sizes, you should be able to
find an appropriate option for most workloads. If the requirements of your workload change, you
can resize a Virtual Machine, as long as its current configuration does not violate the constraints of
the target size. For example, you might need to remove an extra virtual network adapter or a data disk
attached to your VM before you scale it down to a smaller size.

Note: Changing the size of a VM automatically triggers its restart.
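The resizing operation itself consists of retrieving the VM object, modifying its hardware profile, and applying the change, as in the following Azure PowerShell sketch (the resource group, VM name, and target size are placeholders):

$vm = Get-AzureRmVM -ResourceGroupName 'ExampleRG' -Name 'ExampleVM'
$vm.HardwareProfile.VmSize = 'Standard_D2_v3'
Update-AzureRmVM -ResourceGroupName 'ExampleRG' -VM $vm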

The primary factor that determines size, performance, and capabilities of a Virtual Machine is its tier.
There are two tiers of Virtual Machines—Basic and Standard. You might consider using the Basic tier VMs
for any nonproduction workloads that do not require features such as load balancing, autoscaling, or high
availability, and for which you can tolerate disk I/O in the range of 300 IOPS per disk. Note that the Basic
tier VMs do not qualify for any Service Level Agreements pertaining to availability. On the other hand, the
prices of the Basic tier VMs are lower when compared to the Standard tier VMs. There are only a few VM
sizes in the Basic tier: A0 to A4. A Basic_A0 VM is the smallest in this category. It offers a single central
processing unit (CPU) core, 768 MB of memory, and a single data disk. The largest VM in the Basic tier is
the Basic_A4 VM with 8 CPU cores, 14 GB of memory, and up to 16 data disks.

Note: Most VMs in Azure are part of the Standard tier offering. The remainder of this topic
will focus on the Standard VM sizes.

All VM sizes support Standard storage, which offers performance equivalent to magnetic disks. On the
Standard tier VMs, Standard storage delivers 500 IOPS per disk. On the Basic tier VMs, Standard storage
delivers 300 IOPS per disk. Many VM sizes also support high-end storage with performance equivalent to
solid-state drives (SSDs). This type of storage is referred to as Microsoft Azure Premium storage. You can
easily distinguish these VM sizes because they include the letter S in the VM size designation.

Note: For details regarding Premium storage, refer to Module 6, “Planning and
implementing Azure storage.”

VM sizes in Azure
Each VM size has a corresponding identifier, consisting of a combination of one or more letters and
numbers. The leading letter (or, in some cases, letters and a digit) designates a collection of VM sizes
referred to as VM series that share common configuration characteristics such as:

• CPU type

• CPU-to-memory ratio

• Support for SSD-based temporary disks

• Support for Premium storage


Each series includes multiple VM sizes, which differ in the number of CPU cores, amount of memory, size
of the local temporary disk, and the maximum number of network adapters and data disks. For VM sizes
that support Premium storage, an additional difference is the maximum aggregate disk I/O performance.

VM sizes are grouped into the following categories:

• General purpose. This category offers a balanced CPU-to-memory ratio, making it most suitable for
testing, development, and hosting small to medium databases or web servers. It includes A0-A7, Av2,
D, Dv2, Dv3, DS, DSv2, and DSv3 series VM sizes.

• Burstable. This category offers considerably reduced pricing for workloads that have relatively low
resource utilization but might occasionally require a boost in the CPU performance. The Azure
platform throttles CPU performance for Virtual Machines in this category at a specific threshold,
which varies depending on the VM size. However, if the CPU utilization remains below that threshold,
the Virtual Machine accumulates credits. The platform applies credits according to processing
demands to allow the Virtual Machine to run above the threshold for a limited duration. This category
consists of B series VM sizes.

• Compute optimized. This category offers a high CPU-to-memory ratio, making it most suitable for
compute-intensive workloads that do not have extensive memory requirements. Such characteristics
are typical for medium-size traffic web servers or application servers, network appliances, or servers
handling batch processing. This category includes F, Fs, and Fsv2 series VM sizes.
• Memory optimized. This category offers a high memory-to-CPU ratio, making it most suitable for
memory-intensive workloads that do not have extensive compute requirements. Such characteristics
are typical for workloads that keep the bulk of their operational content in memory, such as database
or caching servers. This category includes D, Dv2, DS, DSv2, Ev3, Esv3, M, G, and GS series VM sizes.

• Storage optimized. This category offers high-performance disk I/O, most suitable for big data
processing with both SQL and non-SQL database management systems. This category consists of the
Ls VM sizes.

• GPU. This category offers Graphic Processing Unit support, with thousands of CPU cores, ideal for
implementing workloads such as graphic rendering, video editing, crash simulations, or deep
learning. This category includes NC, NCv2, NCv3, NV, and ND series VM sizes.

• High performance compute. This category offers VMs with the fastest CPUs and, with some VM sizes,
high-throughput Remote Direct Memory Access (RDMA) network interfaces. This category includes H
series VMs and A8-A11 VM sizes.

Note: Dv3, Dsv3, Ev3, and Esv3 series Virtual Machines also provide support for nested
virtualization.
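To review the VM sizes available in a given region, together with their core counts, memory, and data disk limits, you can use the Get-AzureRmVMSize cmdlet; the region name below is illustrative:

Get-AzureRmVMSize -Location 'East US' |
    Sort-Object -Property NumberOfCores |
    Format-Table Name, NumberOfCores, MemoryInMB, MaxDataDiskCount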

Additional Reading: For more information on virtual machine sizes, including any changes
since this course was published, refer to: “Sizes for virtual machines in Azure” at:
http://aka.ms/Iyrbvv

Additional Reading: Virtual machine scale sets support a subset of Virtual Machine sizes.
For more information regarding these VM sizes, including any changes since this course was
published, refer to: “Vertical autoscale with virtual machine scale sets” at: https://aka.ms/Paz31j

Virtual Machine and virtual machine scale set availability and scalability
It is important that your Virtual Machine–based
workloads are resilient to hardware failures. They
also should remain online during maintenance
events that might occur occasionally within the
Azure infrastructure. While you are responsible for
implementing high availability for Virtual Machine
workloads within the operating system, you must
also consider certain provisions to ensure high
availability on the platform level.

If your workload supports load balancing or


failover across multiple operating system
instances, you can use either of these two
methods to implement its resiliency. In such scenarios, you can also implement platform-level resiliency by
using two platform features:

• Availability sets. This feature offers the 99.95% availability service level agreement (SLA) by deploying
Virtual Machines across multiple physical locations within the same Azure datacenter.
• Availability zones. This feature offers the 99.99% availability SLA by deploying Virtual Machines across
multiple Azure datacenters.
If your workload runs on a single Virtual Machine, you can benefit from the 99.9% availability SLA, which
applies as long as all the virtual machine disk files use Azure Premium storage.

Note: At the time of authoring this content, Azure availability zones are in preview.

Understanding availability sets


An availability set is an Azure resource that typically contains two or more VMs. By deploying VMs into
the same availability set, you inform the platform that these VMs will be hosting a highly available
workload. Therefore, the platform will provision these VMs across separate racks within the same Azure
datacenter.

Availability sets remediate two types of events that result in downtime of individual Virtual Machines:

• Planned maintenance events that require restarts of Hyper-V hosts where VMs run. While most Azure
platform updates are transparent to Platform as a Service (PaaS) and IaaS services, some might
involve reboots of Hyper-V hosts, which affect the availability of their VM guests.

• Hardware failures. Although Microsoft designed the Azure platform to be highly resilient, a hardware
failure might affect one or more Hyper-V hosts and their VM guests.

Deployment of VMs into the same availability set provides resiliency against planned maintenance events
by associating VMs in the same availability set with different update domains. Similarly, the placement
improves resiliency against hardware failures by associating VMs in the same availability set with
different fault domains.

Update domains
An availability set consists of up to 20 update domains (you can increase this number from its default of
five). Each update domain represents a set of physical hosts that Azure Service Fabric can update and
restart at the same time without affecting overall availability of VMs grouped in the same availability set.

When you assign more than five VMs to the same availability set (assuming the default settings), the sixth
VM is placed in the same update domain as the first VM, the seventh is placed in the same update domain
as the second VM, and so on. During planned maintenance, only hosts in one of these five update
domains are shut down concurrently, while hosts in the other four remain online. There is a 30-minute
interval between shutdowns of VMs in consecutive update domains.

Fault domains
A fault domain represents a group of Hyper-V hosts that can experience downtime due to a localized
hardware failure (such as a failure of a power unit or a top-of-rack network switch). The platform
provisions Virtual Machines in the same availability set across up to three fault domains.

Implementing availability sets for Virtual Machines


To implement an availability set, you can use the Azure portal, Azure PowerShell, Azure CLI, or Azure
Resource Manager templates. When using the Azure portal, you can create a new availability set while
deploying a new VM, or create a new availability set first and then add VMs to it. Note, however, that you
must add a VM to an availability set at the deployment time.

Note: You cannot add an existing Virtual Machine to an availability set. You can specify
that a Virtual Machine will be part of an availability set only during its deployment.

To create an availability set in the Azure portal, you must specify the following settings:
• Name. A unique sequence of up to 80 characters, starting with either a letter or a number, followed
by letters, numbers, underscores, dashes, or periods, and ending with a letter, a digit, or an
underscore.

• Resource Group. A resource group into which you will deploy the Virtual Machines that will become
part of the availability set.

• Location. The Azure region that is hosting the VMs that will be part of the availability set.

• Fault domains. The number of fault domains (up to three) associated with the availability set.

• Update domains. The number of update domains (up to 20) associated with the availability set.

• Managed. An indication that the availability set will host the VMs that use managed disks. With the
introduction of managed disks, it is possible to create two types of availability sets–managed and
unmanaged. In a managed availability set, all VMs use exclusively managed disks. In an unmanaged
availability set, all VMs use exclusively unmanaged disks. A managed availability set automatically
ensures additional resiliency at the Azure Storage level.

Note: You will learn about managed disks later in this lesson.

Azure PowerShell provides an alternative approach to managing availability sets. The following cmdlets
handle creating, modifying, and removing availability sets, respectively:

New-AzureRmAvailabilitySet
Set-AzureRmAvailabilitySet
Remove-AzureRmAvailabilitySet

To perform the same tasks via Azure CLI, you would use the following commands:

az vm availability-set create
az vm availability-set update
az vm availability-set delete
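For example, the following Azure PowerShell command creates a managed availability set with three fault domains and ten update domains. The resource names are placeholders, and the -Sku parameter (where the value Aligned designates a managed availability set) assumes a recent version of the AzureRM module:

New-AzureRmAvailabilitySet -ResourceGroupName 'ExampleRG' -Name 'WebTierAvSet' `
    -Location 'East US' -PlatformFaultDomainCount 3 -PlatformUpdateDomainCount 10 -Sku 'Aligned'

You would then reference the availability set when deploying each VM, for example via the -AvailabilitySetName parameter of New-AzureRmVM.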

Virtual machine scale set availability and autoscaling

Virtual machine scale sets use placement groups, which are functionally equivalent to availability sets.
Each placement group can contain up to 100 VMs, automatically distributed across five fault domains
and five update domains.

You can manually increase or decrease the number of VMs in a scale set. You can also configure
horizontal autoscaling of VMs according to a custom schedule or based on performance-based rules.
These rules can reference two types of metrics:

• Host metrics, such as percentage CPU, Network In, Network Out, Disk Read Bytes, Disk Write Bytes,
Disk Read Operations/Sec, or Disk Write Operations/Sec. The metrics represent an average value
across all VMs in the scale set. Since they are available to the Hyper-V hosts, they do not require any
additional VM-level components.
• Guest OS metrics, which correspond to the operating system performance counters and provide more
detailed insight into the state of individual VMs in the scale set. To configure autoscaling based on
any of the guest OS metrics, you must first install the Azure diagnostics extension in each VM.
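As an illustration, the following Azure PowerShell sketch defines a rule that adds one VM when average CPU utilization across the scale set exceeds 70 percent, and attaches it to the scale set through an autoscale setting. The resource ID, names, and capacities are placeholders, and the exact parameter names assume the AzureRM.Insights module available at the time of writing:

$vmssId = '/subscriptions/.../providers/Microsoft.Compute/virtualMachineScaleSets/ExampleVmss'
$rule = New-AzureRmAutoscaleRule -MetricName 'Percentage CPU' -MetricResourceId $vmssId `
    -Operator GreaterThan -MetricStatistic Average -Threshold 70 `
    -TimeGrain ([TimeSpan]::FromMinutes(1)) -TimeWindow ([TimeSpan]::FromMinutes(5)) `
    -ScaleActionCooldown ([TimeSpan]::FromMinutes(5)) -ScaleActionDirection Increase `
    -ScaleActionScaleType ChangeCount -ScaleActionValue 1
$asProfile = New-AzureRmAutoscaleProfile -Name 'ScaleOnCpu' -DefaultCapacity 2 `
    -MinimumCapacity 2 -MaximumCapacity 10 -Rule $rule
Add-AzureRmAutoscaleSetting -Name 'VmssAutoscale' -ResourceGroup 'ExampleRG' `
    -Location 'East US' -TargetResourceId $vmssId -AutoscaleProfile $asProfile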

Understanding availability zones


Azure availability zones allow you to place Virtual Machines and virtual machine scale sets that run the
same workload in up to three separate datacenters within the same Azure region. This provides increased
resiliency when compared with availability sets. To provide load balancing across VM availability zones,
you can use an internal or external Azure Standard Load Balancer.
You specify the availability zone while provisioning an Azure VM. When you use the Azure portal, this will
automatically create a managed disk for this VM and a Standard SKU public IP address in the same zone.
The static public IP address is optional. Alternatively, you can provide connectivity from the internet to an
individual Azure VM in an availability zone by taking advantage of the Network Address Translation (NAT)
functionality of an Azure Standard Load Balancer.
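For example, the simplified parameter set of New-AzureRmVM (available in recent AzureRM module versions) lets you place a VM into a specific availability zone; the resource names and image alias below are illustrative:

New-AzureRmVM -ResourceGroupName 'ExampleRG' -Name 'ZonalVM' -Location 'East US 2' `
    -Zone '1' -Image 'Win2016Datacenter' -Credential (Get-Credential)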

Note: For more information regarding Azure Standard Load Balancer, refer to Module 2,
“Implementing and managing Azure networking.”

Considerations for virtual machine availability


When configuring availability sets for Virtual Machines:
• Configure two or more virtual machines in an availability set or availability zone for redundancy. The
primary purpose of an availability set is to provide resiliency to failure of a single virtual machine.
• Configure each application tier as a separate availability set. If virtual machines in your deployment
provide the same functionality, such as web service or database management system, you should
configure them as part of the same availability set. This will ensure that at least one VM in each tier is
always available.

• Wherever applicable, combine load balancing with availability sets or availability zones. You can
implement an Azure Basic Load Balancer in conjunction with an availability set or an Azure Standard
Load Balancer in conjunction with an availability zone to distribute incoming traffic.

Single VM availability
Availability sets provide resiliency for workloads that can run side-by-side on multiple Virtual Machines in
the active-active or active-passive modes. However, there are applications and services that do not
support this type of configuration. While you can install them on individual VMs, you forfeit the benefits
associated with availability sets and availability zones. Fortunately, even in such cases, the Azure platform
provides the availability SLA of 99.9% if you ensure that all VM disks reside in Premium storage.

Handling maintenance of Virtual Machines


By default, subscription owners and co-owners receive email notifications about upcoming maintenance
events that require reboots of Virtual Machines. You can add further recipients for these
notifications, with support for email, text messages, and webhooks. Each notification marks the beginning of a
self-service window, during which customers can move their Virtual Machines to Hyper-V hosts for which
the maintenance period has already passed. The VM still needs to restart, because this happens whenever
Virtual Machines move between Hyper-V hosts. However, customers can choose their own schedule for
host maintenance.

This self-service approach is not necessary when your workload runs on two or more VMs in the same
availability set. However, you might still choose to use it in some scenarios, including the following:
• The interval between restarts of VMs in the consecutive update domains must be longer than 30
minutes.

• Your workload must remain highly available during the maintenance window.

• You need to control the sequence in which VMs restart.


You can view the current maintenance status of Virtual Machines directly in the portal on the Virtual
Machines blade. For each Virtual Machine, the Maintenance column will contain one of the following
values:
• Start now. This indicates that you can initiate self-maintenance from the VM blade.

• Scheduled. This indicates that VM maintenance will occur, but without the option to initiate self-
maintenance.

• Completed. This indicates successful completion of maintenance.

• Skipped. This indicates that the self-maintenance attempt has failed. As a result, the VM’s downtime
will occur during the original maintenance schedule.
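You can also inspect this status with Azure PowerShell. As a hedged sketch (resource names are placeholders), the instance view of a VM exposes a MaintenanceRedeployStatus property, and Restart-AzureRmVM supports a -PerformMaintenance switch:

```powershell
# Sketch: query the maintenance status of a VM and, while the self-service
# window is open, redeploy it proactively. Resource names are placeholders.
$vm = Get-AzureRmVM -ResourceGroupName "AdatumRG" -Name "AdatumVM0" -Status
$vm.MaintenanceRedeployStatus

# Initiate self-service maintenance (valid only during the self-service window):
Restart-AzureRmVM -ResourceGroupName "AdatumRG" -Name "AdatumVM0" -PerformMaintenance
```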

Virtual Machines and virtual machine scale set storage


The operating system and data virtual disk .vhd
files of Virtual Machines and virtual machines that
belong to virtual machine scale sets reside in
Azure Storage in the form of blobs. In addition to
blobs, Azure Storage can also store tables, queues,
and files.

Note: This module focuses on the Azure Storage capabilities applicable to Virtual
Machines. You will learn more about Azure Storage and objects that are not Virtual Machine–
specific in Module 6, “Planning and implementing Azure Storage.”

Azure offers two tiers of Azure Storage capable of storing .vhd files—Standard and Premium. In both
cases, Virtual Machine disks take the form of page blobs because page blobs are optimized for random
read-write access. In general, page blobs can be up to 8 TB in size. However, the maximum size of a VM
data disk you can create and attach to a Virtual Machine is 4 TB. For operating system disks, this limit
is 2 TB.

.vhd files in Azure Storage represent one of two object types—images or disks. An image is a generalized
copy of an operating system, which allows you to create any number of VMs, each with its own unique
characteristics. A disk object is either a non-generalized operating system disk or a data disk. You can use
a copy of an operating system disk to create an exact replica of an individual VM. You can also attach a
data disk to an existing Virtual Machine to access its content.

Images serve as templates from which you provision disks for Virtual Machines and virtual machines in
a virtual machine scale set during their deployment. There are numerous ready-to-use images available to
you from the Marketplace. You can create your own images either by uploading .vhd files from your on-
premises environment and registering them as images, or by creating them from existing Virtual
Machines.

To identify individual images, Azure Resource Manager relies on several parameters, including:
• Publisher name. For example, MicrosoftWindowsServer.

• Offer. For example, WindowsServer.

• SKU. For example, 2016-Datacenter.

• Version. For example, a specific version, such as 2016.127.20170406, or the latest one, which you can
designate by setting the value of the version parameter to latest.

You can use these parameters to identify available images that match your requirements when running
the Get-AzureRmVMImage cmdlet.
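For example, the following lists the available versions of the Windows Server 2016 Datacenter image in one region (the region name is only an example), together with the companion cmdlets that enumerate valid publishers, offers, and SKUs:

```powershell
# List all versions of a specific image in the West Europe region.
Get-AzureRmVMImage -Location "westeurope" `
    -PublisherName "MicrosoftWindowsServer" `
    -Offer "WindowsServer" `
    -Skus "2016-Datacenter"

# Discover valid values for each parameter step by step.
Get-AzureRmVMImagePublisher -Location "westeurope"
Get-AzureRmVMImageOffer -Location "westeurope" -PublisherName "MicrosoftWindowsServer"
Get-AzureRmVMImageSku -Location "westeurope" -PublisherName "MicrosoftWindowsServer" -Offer "WindowsServer"
```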

Virtual Machines and virtual machines in virtual machine scale sets support three types of disks:

• Operating system disks:


o One per VM

o Maximum size of 2 TB

o Labeled as drive C on Windows VMs and mounted as /dev/sda1 on Linux VMs



o Appears to the operating system in the VM as a Serial Advanced Technology Attachment (SATA)
drive

o Contains the operating system

• Temporary disks:

o One per VM
o The size depends on the VM size

o Labeled as drive D on Windows VMs or mounted as /mnt/resource on Linux VMs (/mnt in case of
Ubuntu)

o Provides temporary, nonpersistent storage (hosting the paging file, by default)

The content of the temporary disk is lost if the Hyper-V server hosting a Virtual Machine changes.
This can be a side effect of events such as resizing the Virtual Machine, temporarily stopping and
deallocating the Virtual Machine, or Hyper-V server failure.

o On most VM sizes (with exception of Basic and Standard A0-A7), it uses SSD storage

• Data disks:

o VM size determines the maximum number of data disks that you can attach to the VM

o Maximum size of 4 TB
o You can assign any available drive letter starting with F (on Windows VMs) or mount it via a
custom mount point on Linux VMs

o Appears to the operating system in the VM as a small computer system interface (SCSI) drive

o Provides persistent storage for applications and data

Operating system and data disks are implemented as page blobs in Azure Storage. The temporary disk is
implemented as local storage on the Hyper-V host where the VM is running.

Overview of unmanaged and managed disks


An Azure Storage account is a logical namespace
that, depending on its type, is capable of hosting
different types of objects, including blobs, tables,
queues, and files. You can create a storage
account by using a variety of methods, including
the Azure portal, Azure PowerShell, Azure CLI, and
Azure Resource Manager templates. Storage
accounts provide the persistent store for virtual
machine disks in Azure.

When deploying a Virtual Machine, you must choose the type of disks that will host the
operating system disk and, optionally, data disks. You can use the unmanaged or managed disk type.
Your decision has important implications in terms of functionality, manageability, and pricing.

With unmanaged disks, you must create Azure Storage accounts where Virtual Machine disks will reside.
You have to decide how many storage accounts you will create and how you will distribute .vhd disk files
across them.

These extra considerations do not constitute a significant overhead if the number of Virtual Machines is
relatively small. However, with a larger number of Virtual Machines, the management of storage accounts
results in increased complexity for two reasons:

• The maximum number of Azure Storage accounts per subscription in each region is limited to 250.

• A single Standard Azure Storage account has a performance limit of 20,000 IOPS. Because the Azure
platform allocates 500 IOPS per single Standard storage disk, this creates the practical limit of 40
concurrently active disks per single Azure Storage account.

In addition to capacity and performance, choosing unmanaged or managed disks also affects resiliency.
The Azure platform, by default, synchronously replicates each Azure Storage account across three
locations within a storage cluster consisting of multiple racks of servers. These clusters are referred to as
storage stamps. Clustering protects a Virtual Machine’s disk against a hardware failure affecting a
physical rack. This type of resiliency is typically sufficient when provisioning storage for a standalone
Virtual Machine. However, when provisioning two or more VMs into the same availability set, you should
ensure that their respective storage accounts do not reside in the same storage stamp.

Note: You can determine whether two storage accounts reside in the same storage stamp
by resolving their fully qualified DNS names to the corresponding IP addresses. If the IP addresses
are different, then the storage accounts reside in different storage stamps. You cannot explicitly
request the placement of a storage account in a different storage stamp when using standard
Azure management tools such as the Azure portal, Azure PowerShell, or Azure CLI. If this is
necessary, you can submit a request to Azure support for help with this task.
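As a hedged sketch of the comparison the note describes (the account names are hypothetical, and Resolve-DnsName requires the Windows DnsClient module), you can resolve the blob endpoints of two storage accounts and compare the resulting addresses:

```powershell
# Sketch: resolve each account's blob endpoint and keep only the A records.
# Different IP addresses indicate different storage stamps.
$ip1 = (Resolve-DnsName -Name "adatumstore1.blob.core.windows.net" |
    Where-Object { $_.Type -eq 'A' }).IPAddress
$ip2 = (Resolve-DnsName -Name "adatumstore2.blob.core.windows.net" |
    Where-Object { $_.Type -eq 'A' }).IPAddress
if ($ip1 -ne $ip2) { "The storage accounts reside in different storage stamps." }
```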

By using managed disks, you no longer have to consider these factors. With this approach, the Azure
platform controls the placement of VM disk files and hides the complexity associated with managing
Azure Storage accounts. Managed disks provide the following capacity, performance, and resiliency
improvements:

• The limit on the number of Azure Storage accounts per subscription no longer applies. Instead, there
is a limit of 10,000 managed disks per subscription, per region, and per disk type. This means that you
can create up to 10,000 standard managed disks and 10,000 premium managed disks in each Azure region.

• The performance limits of individual Standard Azure Storage accounts are no longer relevant.

• The Azure platform automatically distributes managed disks across different storage stamps for
Virtual Machines in the same availability set and for virtual machines in the same placement group of
virtual machine scale sets.

Managed disks also provide other functional benefits. For example, you can convert a managed disk
between Standard and Premium storage directly from the Azure portal. You can also create a Virtual
Machine from a custom image stored in any storage account in the same region and the same
subscription. With unmanaged disks, you must store Virtual Machine disks in the same storage
account as the image.
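As a hedged sketch of such a conversion with Azure PowerShell (the resource names are hypothetical, and the exact -AccountType value depends on the AzureRM module version), the disk's owning VM must be stopped and deallocated first:

```powershell
# Sketch: convert a managed disk from Standard to Premium storage.
Stop-AzureRmVM -ResourceGroupName "AdatumRG" -Name "AdatumVM0" -Force

$diskUpdate = New-AzureRmDiskUpdateConfig -AccountType Premium_LRS
Update-AzureRmDisk -ResourceGroupName "AdatumRG" `
    -DiskName "AdatumVM0_OsDisk" `
    -DiskUpdate $diskUpdate

Start-AzureRmVM -ResourceGroupName "AdatumRG" -Name "AdatumVM0"
```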

Note: In some cases, these benefits might result in extra cost. When using Standard storage
with unmanaged disks, you pay only for the space you use. With managed disks, you pay for the
full capacity of a disk, regardless of the disk space that is in use. When using Premium storage,
the cost always represents the full capacity of a disk, for both unmanaged and managed disks.

The managed disks feature applies in a uniform way to all VMs in the same availability set. You might
recall that an availability set has a Managed property that determines its support for managed disks. This
means that you cannot mix VMs with unmanaged disks and VMs with managed disks in the same
availability set. Similarly, you cannot attach a mix of unmanaged and managed disks to the same Virtual
Machine. The same restrictions apply to virtual machine scale sets.

Note: Make sure to choose the disk type that you intend to use when you deploy a Virtual
Machine. You can convert unmanaged disks of Virtual Machines to managed disks; however, this
requires stopping and deallocating all VMs in the availability set. There is no support for
converting managed disks to unmanaged disks.
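The conversion that the note mentions can be scripted. As a hedged sketch (the resource names are hypothetical):

```powershell
# Sketch: convert all of a VM's unmanaged disks to managed disks.
# The VM must be stopped and deallocated before the conversion starts.
Stop-AzureRmVM -ResourceGroupName "AdatumRG" -Name "AdatumVM0" -Force
ConvertTo-AzureRmVMManagedDisk -ResourceGroupName "AdatumRG" -VMName "AdatumVM0"
```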

Managed disks are available in the following sizes:

• For Premium storage:


o P4 (32 GB)

o P6 (64 GB)

o P10 (128 GB)


o P20 (512 GB)

o P30 (1 TB)

o P40 (2 TB)

o P50 (4 TB)

• For Standard storage:

o S4 (32 GB)
o S6 (64 GB)

o S10 (128 GB)

o S20 (512 GB)


o S30 (1 TB)

o S40 (2 TB)

o S50 (4 TB)

Note: When you deploy Azure VMs into availability zones, you must use managed disks.

Additional Reading: For more information about managed disks, refer to: “Azure
Managed Disks Overview” at: https://aka.ms/d9rm4w

When planning for virtual machine disk configuration, you should note that storage-related charges are
calculated according to four criteria:

• Total amount of disk space represents the amount of storage you use (with standard unmanaged
disks) or allocate (with standard managed disks, premium unmanaged disks, and premium managed
disks).

• Replication topology determines how many copies of your data are concurrently maintained and the
number of Azure regions in which they are located. Note that unmanaged premium disks, managed
standard disks, and managed premium disks support Locally Redundant Storage only. You can store
unmanaged standard disks in a Geo Redundant Storage or Read-Access Geo Redundant Storage
account. This incurs additional charges.

Note: For more information about Azure Storage replication, refer to Module 6, “Planning
and implementing Azure storage.”

• Transaction volume refers to the number of read and write operations performed against a storage
account. This applies only to standard unmanaged disks.

• Data egress refers to data transferred out of an Azure region. When services or applications and the
storage account they are using are not located in the same Azure region, you will incur charges for
data egress. Note that this never applies to a Virtual Machine or virtual machine scale sets, because
the storage accounts hosting their .vhd files must reside in the same region. However, you should
consider the location of a Virtual Machine and virtual machine scale set in relation to other services in
your Azure environment.

Demonstration: Creating an availability set by using the Azure portal


In this demonstration, you will see how to create an availability set by using the Azure portal.

Question: What types of workloads running currently in your on-premises environment are
suitable for migration to Virtual Machines and virtual machine scale sets?

Lesson 3
Deploying Virtual Machines and virtual machine scale sets
You can deploy Virtual Machines and virtual machine scale sets by using several methods, including the
Azure portal, Azure PowerShell, Azure CLI, or Azure Resource Manager templates. This lesson describes
the primary methods for deploying Virtual Machines and demonstrates the use of these methods.

Lesson Objectives
After completing this lesson, you will be able to:

• Identify the methods of creating Virtual Machines and virtual machine scale sets.

• Explain how to deploy a Virtual Machine and virtual machine scale set by using the Azure portal.
• Explain how to deploy a Virtual Machine and virtual machine scale set by using Azure PowerShell.

• Explain how to deploy a Virtual Machine and a virtual machine scale set by using Azure CLI.

• Explain how to deploy a Virtual Machine and virtual machine scale set by using an Azure Resource
Manager template.

• Create a Virtual Machine and virtual machine scale set by using the Azure portal.

Determining the Virtual Machine and virtual machine scale set deployment
method
Azure Resource Manager provides several
methods of deploying Virtual Machines and
virtual machine scale sets, including the following:
• Azure portal. This method is straightforward
because it automatically provisions most
common configuration options. For example,
for Virtual Machines, it automatically
configures connectivity via a public IP address
and either the Remote Desktop Protocol
(RDP) or Secure Shell (SSH) protocol,
according to your choice of operating system.
On the other hand, this method has limited
flexibility. For example, you cannot create a Virtual Machine from an unmanaged custom image or
attach a data disk or an additional network adapter during provisioning. This method is also not
suitable for deploying a large number of Virtual Machines. In case of virtual machine scale sets, you
do not have the option of deploying it into a subnet of an existing virtual network.

• Azure PowerShell. This method offers automation and full flexibility, allowing you to create multi-
network adapter and multidisk Virtual Machines and virtual machine scale set configurations from
either Marketplace-based or custom images. Simultaneous deployment of multiple Virtual Machines
is possible, but the speed of deployment is not optimal.

• Azure CLI. This method is equivalent to using Azure PowerShell, in terms of flexibility and automation
capabilities. The choice between the two is dependent primarily on the preference of the person
carrying out the deployment.
• Azure Resource Manager templates. This method provides full flexibility and the best performance for
large deployments of Virtual Machine and virtual machine scale sets.

Using images to create virtual machines


As mentioned earlier in this module, you can deploy a new Virtual Machine or virtual machine scale set by
using either an image or a disk. Deploying multiple Virtual Machines from a single set of .vhd files requires
the use of images. The same requirement applies to deployment of virtual machine scale sets. You can
choose between ready-to-use images from the Marketplace and custom images that you generate from
on-premises computers or Virtual Machines. The next module will cover the process of uploading on-
premises .vhd files to Azure Storage. This topic focuses on the process of generating images from Virtual
Machines.

Image generation starts with generalizing the operating system. To generalize a Windows operating
system in a Virtual Machine, run:

Sysprep /oobe /generalize /shutdown

This will automatically shut down the operating system once the generalization process completes.

To generalize a Linux operating system in a Virtual Machine, run:

sudo waagent -deprovision+user -force

The +user command line switch removes the most recently provisioned user account. The remainder of
the process depends on whether you are using unmanaged or managed disks. In the first case, the
process will result in an unmanaged image, and in the second, a managed image. In both cases, the image
will contain all the disks attached to the VM, including any data disks.

Generating an unmanaged image from a Virtual Machine with unmanaged disks by


using Azure PowerShell
To generate an unmanaged image from a Virtual Machine with unmanaged disks by using Azure
PowerShell, use the following steps:

1. From Azure PowerShell, authenticate to the Azure subscription where the Virtual Machine resides.

2. Deallocate the resources of the Virtual Machine that you are capturing by running:

Stop-AzureRmVM -ResourceGroupName <ResourceGroupName> -Name <VMName> -Force

3. Set the status of the virtual machine that you are capturing to Generalized by running:

Set-AzureRmVM -ResourceGroupName <ResourceGroupName> -Name <VMName> -Generalized

This will automatically remove the Virtual Machine, making its .vhd file ready for image generation.

4. Generate an image in the same storage account by running:

Save-AzureRmVMImage -ResourceGroupName <ResourceGroupName> -VMName <VMName> `
-DestinationContainerName <ContainerName> -VHDNamePrefix <PrefixName> -Path <JSONFile>

5. This will create an image in the container that you specified within the storage account hosting the
Virtual Machine disks. The command will also create a file at the location referenced by the -Path
parameter, containing a JavaScript Object Notation (JSON) representation of the new image’s
configuration. The file contains the uri parameter of the image. You will need to assign its value to the
-SourceImageUri parameter of Set-AzureRmVMOSDisk during provisioning of a Virtual
Machine based on this image.

Note: When provisioning a new VM with unmanaged disks based on a custom image, the
disks must reside in the same storage account as the image.

Additional Reading: For more information about creating unmanaged VM images, refer
to: “How to create an unmanaged VM image from an Azure VM” at: http://aka.ms/Cey939

Additional Reading: For description of the equivalent procedure when using Azure CLI,
refer to: “How to create an image of a virtual machine or VHD” at: https://aka.ms/ybb17g

Generating a managed image from a Virtual Machine with managed disks by using
Azure PowerShell
1. From Azure PowerShell, authenticate to the Azure subscription where the Virtual Machine resides.

2. Deallocate the resources of the Virtual Machine that you are capturing by running:

Stop-AzureRmVM -ResourceGroupName <ResourceGroupName> -Name <VMName> -Force

3. Set the status of the virtual machine that you are capturing to Generalized by running:

Set-AzureRmVM -ResourceGroupName <ResourceGroupName> -Name <VMName> -Generalized

This will automatically remove the Virtual Machine, making its disks ready for image generation.
4. Store a reference to the Virtual Machine in a variable by running:

$vm = Get-AzureRmVM -ResourceGroupName <ResourceGroupName> -Name <VMName>

5. Create the image configuration by running:

$image = New-AzureRmImageConfig -Location $vm.Location -SourceVirtualMachineId $vm.Id

6. Generate the image in the same Azure region by running:

New-AzureRmImage -Image $image -ImageName <ImageName> -ResourceGroupName <ResourceGroupName>

7. This will create an image in the same region as the Virtual Machine. You can use this image to provision
virtual machines. To use Azure PowerShell to provision a Virtual Machine based on the custom image,
you use the Set-AzureRmVMSourceImage cmdlet. To use Azure PowerShell to provision a virtual
machine scale set based on the custom image, you use the Set-AzureRmVmssStorageProfile cmdlet.
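As a hedged sketch of the image-related portion of such a deployment (the resource names and VM size are hypothetical; the network and operating system profile steps are omitted for brevity):

```powershell
# Sketch: build a VM configuration that references the managed image by ID.
$image    = Get-AzureRmImage -ResourceGroupName "AdatumRG" -ImageName "AdatumImage"
$vmConfig = New-AzureRmVMConfig -VMName "AdatumVM1" -VMSize "Standard_DS2_v2"
$vmConfig = Set-AzureRmVMSourceImage -VM $vmConfig -Id $image.Id
# The configuration would then be completed with OS profile, disk, and network
# settings before being passed to New-AzureRmVM.
```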

Additional Reading: For more information about creating managed VM images, refer to:
“Create a managed image of a generalized VM in Azure” at: https://aka.ms/bqmp6g

Using the Azure portal to create Virtual Machines and virtual machine
scale sets
Creating a new Virtual Machine or a virtual
machine scale set by using the Azure portal is a
relatively straightforward process, although it does
involve several steps. You must be familiar with
these steps to implement the optimal
configuration.

Creating a Virtual Machine by using the


Azure portal
When creating a Virtual Machine, the first step is
to choose the target operating system image. The
Marketplace contains images of various Microsoft
and Linux operating systems, products, and even
ready-to-use multiserver solutions. For example, you can select a basic Windows Server installation or a
specific product, which comes preinstalled on the server. If you are performing a Linux installation, you
can select from multiple versions of a number of distributions, including:

• CentOS
• CoreOS

• Debian

• Oracle Linux

• Red Hat Enterprise

• SUSE Linux Enterprise

• openSUSE
• Ubuntu

When you create a Windows VM by using the Azure portal, you can specify the following settings:

• VM name. This setting designates the name assigned to the operating system instance.

• VM disk type. This setting designates the storage type of the operating system disk. You can choose
either SSD or hard disk drive (HDD). The first option provisions the operating system disk by using
Premium storage. The second one provisions the operating system disk by using Standard storage.

• User name. This setting designates the name of the local administrative account that you will use
when you manage the server.

• Password. This setting designates the password of the administrative account.

• Subscription. This setting determines the subscription to which you deploy the VM.

• Resource group. This setting specifies the name of the resource group that will contain the VM and
its resources (such as virtual network adapters). You can create a new resource group when you
deploy the Virtual Machine or place it in an existing one.

• Location. This setting identifies the name of the Azure region where the Hyper-V systems hosting
your VM reside.
• Licensing model. This setting allows customers with Software Assurance to use the Hybrid Benefit
licensing model to minimize licensing costs.

• VM size. This setting identifies the pricing tier, performance, and functional capabilities of the VM (as
described in the previous lesson of this module).

• High availability: This setting allows you to specify whether the Azure VM should belong to an
availability set or reside in a specific availability zone.

• Storage. This setting allows you to choose between managed and unmanaged disks. If you choose
unmanaged disks, you must specify the name of a new or existing Azure Storage account, and the
name of a container within it that will host the operating system disk of the VM. If you specified that
the Azure VM should reside in an availability zone, you must create a new managed disk or use an
existing disk.

• Virtual network. This setting identifies the virtual network in Azure to which the VM is automatically
connected. This allows for direct communication with other VMs on the same virtual network or
other, directly connected virtual networks.

• Subnet. This setting identifies the subnet within the virtual network. The private IP address of the VM
is part of the subnet IP address space.
• Public IP address. This setting allows you to (optionally) provide an internet-accessible IP address to
facilitate connectivity to the VM from:

o Outside Azure, including on-premises environments or non-Microsoft cloud providers.

o Other Azure services that are not part of the same virtual network as the VM or any other
network connected to that virtual network.

You can use either the Basic or Standard SKU of a public IP address resource. However, keep in mind
that only the Standard SKU supports availability zone-related functionality.

Note: For more information about the differences between the Standard and Basic public
IP address SKUs, refer to Module 2, “Implementing and managing Azure networking.”

• Network security group. This setting lets you restrict network connectivity to and from the Virtual
Machine. By default, the network security group in this case allows connectivity from the internet to
TCP port 3389 of the Virtual Machine. The intention of this configuration is to permit inbound RDP
sessions after the VM deployment is complete. You can change the default settings if they do not suit
your requirements.

• Extensions. This setting allows you to configure an operating system and applications that run in the
VM after its deployment is complete, providing custom management capabilities.

• Auto-shutdown. This setting allows you to set a custom daily schedule for automatic shutdown of
the Virtual Machine. You can enable shutdown notifications, which facilitate postponing or bypassing
shutdown on an as-needed basis.

• Monitoring. Once enabled, this setting triggers the collection of performance and diagnostics data
that you can use to track and troubleshoot issues affecting VM workloads.

• Diagnostics storage account. This setting designates an Azure Storage location where the
performance and diagnostics data will reside.

• Backup. This setting allows you to configure automatic backups of the Virtual Machine. After
enabling it, you will also need to provide the Azure Recovery Services vaults where the backups will
reside and specify the backup policy, which determines frequency and retention of backups.

When you create a Linux VM, the settings are mostly the same. There are three primary differences:

• You can choose between the password-based and SSH public key–based authentication.

• When using managed disks, you can choose the OS disk size.

• The default network security group allows connectivity from the internet to port 22 on the VM. The
intention of this configuration is to permit SSH sessions after the VM deployment is complete. You
can change the default settings if they do not suit your requirements.

The default settings yield a configuration that is suitable in most scenarios, although it might not be
optimal, depending on your security needs. In particular, the new VM will have a public IP address and allow
connectivity via either RDP (in the case of a Windows image) or SSH (for Linux distributions) from any
system with internet access. To successfully sign in to the operating system of a Virtual
Machine, you must know the administrative credentials. In the case of Linux VMs that you configured with
SSH-based authentication, you must have access to the private key of the SSH key pair.

Creating a virtual machine scale set by using the Azure portal


When creating a virtual machine scale set by using the Azure portal, you can specify the following
settings:

• Virtual machine scale set name. This setting designates the name assigned to the virtual machine
scale set. This name will serve as the prefix of the names of individual VMs within the scale set. The
suffix of the VM names consists of the underscore character and a random instance ID, which
uniquely identifies each VM.

• Operating system disk image. This setting designates the operating system running on the VMs
within the scale set. At the time of authoring this course, you can choose any of the following
operating systems:

o Windows Server 2016 Datacenter


o Windows Server 2016 Datacenter – with Containers

o Windows Server 2012 R2 Datacenter

o Windows Server 2012 Datacenter

o Windows Server 2008 R2 SP1

o CentOS-based 6.8

o CentOS-based 7.2
o CoreOS Linux (Beta)

o CoreOS Linux (Stable)

o Debian 7 “Wheezy”
o Debian 8 “Jessie”

o Red Hat Enterprise Linux 6.8

o Red Hat Enterprise Linux 7.2

o SUSE Linux Enterprise 11 SP4

o SUSE Linux Enterprise 12 SP3

o Ubuntu Server 14.04 LTS

o Ubuntu Server 16.04 LTS



• Subscription. This setting determines the subscription to which you deploy the VM.

• Resource group. This setting specifies the name of the resource group that will contain the VM and
its resources (such as virtual network adapters). You can create a new resource group when you
deploy the Virtual Machine or place it in an existing one.

• Location. This setting identifies the name of the Azure region where the Hyper-V systems hosting
your VM reside.

• Availability zone: This setting allows you to specify whether the virtual machine scale set should
reside in a specific availability zone.

• User name. This setting designates the name of the local administrative account that you will use
when you manage the server.
• Password or SSH public key. This setting designates the password of the administrative account or
the public key of an SSH key pair, if you chose a Linux operating system.

• Instance count. This setting designates the number of virtual machines in the scale set.

• Instance size. This setting designates the size of virtual machines in the scale set.
• Enable scaling beyond 100 instances. Setting this to Yes sets the value of the singlePlacementGroup
property of the virtual machine scale set to False, which enforces the use of managed disks. If you leave
this setting at its default value of No, you can use unmanaged disks.
• Use managed disks. This setting is available only if you retained the default value of No for the Enable
scaling beyond 100 instances setting. In that case, you can decide whether to use unmanaged or
managed disks. Otherwise, the use of managed disks is mandatory and this setting is no longer
configurable.

• Public IP address name. This setting designates the name of the public IP address resource through
which you will be able to connect to the Azure load balancer in front of the virtual machine scale set.

• Domain name label. This setting requires you to specify a unique DNS name that the platform will
associate with the public IP address.
• Autoscale. This setting allows you to enable and configure autoscaling of the virtual machine scale
set. If you enable it, then you will also need to provide:

o The minimum number of VMs.

o The maximum number of VMs.

o CPU threshold (%) and the number of VMs to increase by for scaling out.

o CPU threshold (%) and the number of VMs to decrease by for scaling in.

Using Azure PowerShell to create Virtual Machines and virtual machine
scale sets

Using Azure PowerShell to create Virtual Machines with managed disks
from a Windows Server 2016 Marketplace image
To create a Virtual Machine with managed disks from a Windows Server 2016 Marketplace image by
using Azure PowerShell, perform the following steps:

1. Open the Azure PowerShell console and sign in to Azure:

Login-AzureRmAccount

2. List the names of Azure subscriptions associated with your account:

Get-AzureRmSubscription | Sort-Object SubscriptionName | Select-Object SubscriptionName

3. Select the target subscription:

Select-AzureRmSubscription -SubscriptionName "<subscription name>"

where <subscription name> is the name of the subscription that you identified in the list from step 2
and to which you want to deploy the Virtual Machine.

4. Create a resource group that will contain the Virtual Machine and all objects associated with it:

$resourceGroup = New-AzureRmResourceGroup -ResourceGroupName <resource group name> -Location <Azure region>

where <resource group name> is the name you want to assign to the resource group and <Azure
region> is its location.

5. Create a subnet configuration:

$subnetConfig = New-AzureRmVirtualNetworkSubnetConfig `
-Name <subnet name> `
-AddressPrefix <subnet IP address prefix>

where <subnet name> is the name you want to assign to the subnet to which you will deploy the
Virtual Machine and <subnet IP address prefix> is the IP address range of that subnet.

6. Create a virtual network:

$vnet = New-AzureRmVirtualNetwork `
-ResourceGroupName $resourceGroup.Name `
-Location $resourceGroup.location `
-Name <vnet name> `
-AddressPrefix <vnet IP address space> `
-Subnet $subnetConfig

where <vnet name> is the name you want to assign to the virtual network containing the subnet you
created in the previous step and <vnet IP address space> is its IP address space.

7. Create a public IP address:

$pip = New-AzureRmPublicIpAddress `
-ResourceGroupName $resourceGroup.Name `
-Location $resourceGroup.location `
-AllocationMethod Static `
-Name <public IP address name>

where <public IP address name> is the name you want to assign to the public IP address that will be
associated with the Virtual Machine.

8. Create a network adapter:

$nic = New-AzureRmNetworkInterface `
-ResourceGroupName $resourceGroup.Name `
-Location $resourceGroup.location `
-Name <network adapter name> `
-SubnetId $vnet.Subnets[0].Id `
-PublicIpAddressId $pip.Id

where <network adapter name> is the name you want to assign to the network adapter that will be
attached to the Virtual Machine.

9. Create a Network Security Group (NSG) rule allowing inbound RDP traffic:

$nsgRule = New-AzureRmNetworkSecurityRuleConfig `
-Name <NSG rule name> `
-Protocol Tcp `
-Direction Inbound `
-Priority 1000 `
-SourceAddressPrefix * `
-SourcePortRange * `
-DestinationAddressPrefix * `
-DestinationPortRange 3389 `
-Access Allow

where <NSG rule name> is the name you want to assign to the NSG rule.
10. Create an NSG:

$nsg = New-AzureRmNetworkSecurityGroup `
-ResourceGroupName $resourceGroup.Name `
-Location $resourceGroup.location `
-Name <NSG name> `
-SecurityRules $nsgRule

where <NSG name> is the name you want to assign to the NSG.
11. Associate the NSG with the subnet you created:

Set-AzureRmVirtualNetworkSubnetConfig `
-Name <subnet name> `
-VirtualNetwork $vnet `
-NetworkSecurityGroup $nsg `
-AddressPrefix <subnet IP address prefix>

12. Apply the update to the virtual network:

Set-AzureRmVirtualNetwork -VirtualNetwork $vnet



13. Set administrative credentials for the operating system within the Virtual Machine:

$cred = Get-Credential

This command will prompt you for a user name and password, and will store your response in the
$cred variable.

14. Create an initial configuration of the Virtual Machine and store it in the variable $vm:

$vm = New-AzureRmVMConfig -VMName <VM name> -VMSize <VM size>

where <VM name> is the name you want to assign to the VM and <VM size> is its intended size.

15. Assign the operating system and credentials to the VM configuration:

$vm = Set-AzureRmVMOperatingSystem `
-VM $vm `
-Windows `
-ComputerName myVM `
-Credential $cred `
-ProvisionVMAgent -EnableAutoUpdate

16. Add the Windows Server 2016 Marketplace image information to the VM configuration:

$vm = Set-AzureRmVMSourceImage `
-VM $vm `
-PublisherName MicrosoftWindowsServer `
-Offer WindowsServer `
-Skus 2016-Datacenter `
-Version latest

17. Add the operating system disk settings to the VM configuration:

$vm = Set-AzureRmVMOSDisk `
-VM $vm `
-Name <OS disk name> `
-DiskSizeInGB <OS disk size> `
-CreateOption FromImage `
-Caching ReadWrite

where <OS disk name> is the name you want to assign to the operating system (OS) disk and
<OS disk size> is its intended size.

18. Add the network adapter to the VM configuration:

$vm = Add-AzureRmVMNetworkInterface -VM $vm -Id $nic.Id

19. Finally, create the virtual machine:

New-AzureRmVM -ResourceGroupName $resourceGroup.Name -Location $resourceGroup.location -VM $vm
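Put together, steps 1 through 19 form a single deployable script. The following condensed sketch illustrates the flow; the subscription, resource group, region, names, address ranges, and size shown are illustrative assumptions, not requirements:

```powershell
# Sign in and select the target subscription (illustrative name)
Login-AzureRmAccount
Select-AzureRmSubscription -SubscriptionName "Contoso Subscription"

# Resource group and networking
$resourceGroup = New-AzureRmResourceGroup -ResourceGroupName "20533E03-rg" -Location "westeurope"
$subnetConfig = New-AzureRmVirtualNetworkSubnetConfig -Name "subnet0" -AddressPrefix "10.0.0.0/24"
$vnet = New-AzureRmVirtualNetwork -ResourceGroupName $resourceGroup.Name `
    -Location $resourceGroup.Location -Name "vnet0" -AddressPrefix "10.0.0.0/16" -Subnet $subnetConfig
$pip = New-AzureRmPublicIpAddress -ResourceGroupName $resourceGroup.Name `
    -Location $resourceGroup.Location -AllocationMethod Static -Name "vm0-pip"
$nic = New-AzureRmNetworkInterface -ResourceGroupName $resourceGroup.Name `
    -Location $resourceGroup.Location -Name "vm0-nic" `
    -SubnetId $vnet.Subnets[0].Id -PublicIpAddressId $pip.Id

# NSG with a rule allowing inbound RDP, applied to the subnet
$nsgRule = New-AzureRmNetworkSecurityRuleConfig -Name "allow-rdp" -Protocol Tcp -Direction Inbound `
    -Priority 1000 -SourceAddressPrefix * -SourcePortRange * -DestinationAddressPrefix * `
    -DestinationPortRange 3389 -Access Allow
$nsg = New-AzureRmNetworkSecurityGroup -ResourceGroupName $resourceGroup.Name `
    -Location $resourceGroup.Location -Name "vm0-nsg" -SecurityRules $nsgRule
Set-AzureRmVirtualNetworkSubnetConfig -Name "subnet0" -VirtualNetwork $vnet `
    -NetworkSecurityGroup $nsg -AddressPrefix "10.0.0.0/24"
Set-AzureRmVirtualNetwork -VirtualNetwork $vnet

# VM configuration and deployment
$cred = Get-Credential
$vm = New-AzureRmVMConfig -VMName "vm0" -VMSize "Standard_DS1_v2"
$vm = Set-AzureRmVMOperatingSystem -VM $vm -Windows -ComputerName "vm0" -Credential $cred `
    -ProvisionVMAgent -EnableAutoUpdate
$vm = Set-AzureRmVMSourceImage -VM $vm -PublisherName MicrosoftWindowsServer `
    -Offer WindowsServer -Skus 2016-Datacenter -Version latest
$vm = Set-AzureRmVMOSDisk -VM $vm -Name "vm0-osdisk" -DiskSizeInGB 128 `
    -CreateOption FromImage -Caching ReadWrite
$vm = Add-AzureRmVMNetworkInterface -VM $vm -Id $nic.Id
New-AzureRmVM -ResourceGroupName $resourceGroup.Name -Location $resourceGroup.Location -VM $vm
```

Running the script requires an active Azure subscription; it prompts interactively for the sign-in and for the administrative credentials.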

Note: If you want to provision a Virtual Machine quickly, with minimal customization, use
the Quick Start option.

Additional Reading: For more information on creating Virtual Machines by using Azure
PowerShell, refer to: “Create and Manage Windows VMs with the Azure PowerShell module” at:
https://aka.ms/og8f5z

Using Azure PowerShell to create a Virtual Machine with managed disks from a
custom Windows image
To create a Virtual Machine with managed disks from a custom image by using Azure PowerShell, perform
the following steps:

1. Open the Azure PowerShell console and sign in to Azure:

Login-AzureRmAccount

2. List the names of Azure subscriptions associated with your account:

Get-AzureRmSubscription | Sort-Object SubscriptionName | Select-Object SubscriptionName

3. Select the target subscription:

Select-AzureRmSubscription -SubscriptionName "<subscription name>"

where <subscription name> is the name of the subscription that you identified in the list from step 2
and to which you want to deploy the Virtual Machine.

4. Use the steps described earlier in this topic to perform the following tasks:
o Create a virtual network and its subnet.

o Create a public IP address.

o Create a network adapter.

o Create an NSG with a rule allowing inbound RDP traffic.

o Store OS admin credentials in a variable.

o Initiate the VM configuration.


5. Collect information about the image:

$rgName = "<resource group name>"
$location = "<Azure region>"
$imageName = "<image name>"
$image = Get-AzureRMImage -ImageName $imageName -ResourceGroupName $rgName

6. Set the VM image as the source image for the new Virtual Machine by assigning the image ID to the
VM configuration:

$vm = Set-AzureRmVMSourceImage -VM $vm -Id $image.Id

7. Assign the OS disk to the VM configuration:

$vm = Set-AzureRmVMOSDisk -VM $vm `
-StorageAccountType <PremiumLRS or StandardLRS> `
-DiskSizeInGB <disk size in GB> `
-CreateOption FromImage `
-Caching <ReadWrite, ReadOnly, or None>
$vm = Set-AzureRmVMOperatingSystem `
-VM $vm `
-Windows `
-ComputerName $computerName `
-Credential $cred `
-ProvisionVMAgent -EnableAutoUpdate

8. Use the steps described earlier in this topic to add the network adapter to the VM configuration.

9. Create the VM:

New-AzureRmVM -ResourceGroupName $resourceGroup.Name -Location $resourceGroup.location -VM $vm

Additional Reading: For more information on creating a Virtual Machine from a custom
managed image by using Azure PowerShell, refer to: “Create a VM from a managed image” at:
https://aka.ms/yty166

Additional Reading: For information on using Azure PowerShell to create Azure VMs in
availability zones, refer to: “Create a Windows virtual machine in an availability zone with
PowerShell” at: https://aka.ms/rogpdm

Using Azure PowerShell to create a virtual machine scale set with managed disks
from a Windows Server 2016 Marketplace image
To create a single placement group–based virtual machine scale set with managed disks from a
Marketplace image by using Azure PowerShell, perform the following high-level steps:

1. Create a resource group by running the New-AzureRmResourceGroup cmdlet.

2. Create a virtual network and a subnet that will host the virtual machine scale set by using the New-
AzureRmVirtualNetwork and New-AzureRmVirtualNetworkSubnetConfig cmdlets.
3. Create a public IP address that you will subsequently associate with the frontend IP address of the VM
scale set.

4. Create and configure an Azure load balancer by running the New-AzureRmLoadBalancer cmdlet. As
part of this step, you must define components of the load balancer by running the following cmdlets:
o New-AzureRmLoadBalancerFrontendIpConfig to define frontend IP configuration.

o New-AzureRmLoadBalancerBackendAddressPoolConfig to define the backend address pool.


o New-AzureRmLoadBalancerInboundNatPoolConfig to define NAT configuration, which will
facilitate connectivity to individual VMs in the scale set.

o New-AzureRmLoadBalancerProbeConfig to define the health probe of the load balancer.

o New-AzureRmLoadBalancerRuleConfig to define load-balancing rules of the load balancer.

5. Define the IP configuration of the VM scale set by running the New-AzureRmVmssIpConfig cmdlet.
This cmdlet references the load balancer backend address pool, the inbound NAT pool, and the
subnet where the VM scale set resides.

6. Create the VM scale set by running the New-AzureRmVmss cmdlet. As part of this step, you must
define components of the VM scale set by running the following cmdlets:

o New-AzureRmVmssConfig to create an object representing the VM scale set configuration.

o Set-AzureRmVmssStorageProfile to reference an operating system image that the platform
will use to provision VMs in the scale set.

o Set-AzureRmVmssOsProfile to define the operating system configuration of the scale set VMs,
including the credentials of the local administrator and the prefix of the VM names.

o Add-AzureRmVmssNetworkInterfaceConfiguration to associate the load balancer network
configuration with the VM scale set IP configuration.
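The high-level steps listed above might come together as in the following abbreviated sketch. It assumes that the virtual network ($vnet), the load balancer ($lb), and administrator credentials ($cred) were already created as described earlier; all names, sizes, and counts are illustrative:

```powershell
# Scale set shell: two Standard_DS1_v2 instances with automatic upgrades (illustrative values)
$vmssConfig = New-AzureRmVmssConfig -Location "westeurope" -SkuCapacity 2 `
    -SkuName "Standard_DS1_v2" -UpgradePolicyMode "Automatic"

# Marketplace image and managed OS disks
Set-AzureRmVmssStorageProfile -VirtualMachineScaleSet $vmssConfig `
    -ImageReferencePublisher "MicrosoftWindowsServer" -ImageReferenceOffer "WindowsServer" `
    -ImageReferenceSku "2016-Datacenter" -ImageReferenceVersion "latest" `
    -OsDiskCreateOption "FromImage"

# OS profile: VM name prefix and local administrator credentials
Set-AzureRmVmssOsProfile -VirtualMachineScaleSet $vmssConfig -ComputerNamePrefix "vmss0" `
    -AdminUsername $cred.UserName -AdminPassword $cred.GetNetworkCredential().Password

# IP configuration referencing the subnet, backend address pool, and inbound NAT pool
$ipConfig = New-AzureRmVmssIpConfig -Name "ipconfig0" `
    -SubnetId $vnet.Subnets[0].Id `
    -LoadBalancerBackendAddressPoolsId $lb.BackendAddressPools[0].Id `
    -LoadBalancerInboundNatPoolsId $lb.InboundNatPools[0].Id
Add-AzureRmVmssNetworkInterfaceConfiguration -VirtualMachineScaleSet $vmssConfig `
    -Name "nicconfig0" -Primary $true -IPConfiguration $ipConfig

# Create the scale set
New-AzureRmVmss -ResourceGroupName "20533E03-rg" -VMScaleSetName "vmss0" `
    -VirtualMachineScaleSet $vmssConfig
```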

Additional Reading: For more information on creating a Virtual Machine from a


Marketplace image by using Azure PowerShell, refer to: “Quickstart: Create a virtual machine
scale set with Azure PowerShell” at: https://aka.ms/Krkt5i

Additional Reading: For information on using Azure PowerShell to create virtual machine
scale sets in availability zones, refer to: “Create a virtual machine scale set that uses Availability
Zones” at: https://aka.ms/Crl8t6

Using Azure CLI to create a Virtual Machine and virtual machine scale set

Using Azure CLI to create Virtual Machines with managed disks from a
Marketplace Linux image
To create a Virtual Machine with managed disks from a Marketplace image by using Azure CLI,
perform the following steps:
1. Sign in to Azure:
1. Sign in to Azure:

az login

2. Set your subscription:

az account set --subscription <subscription name>

where <subscription name> is the name of the Azure subscription into which you intend to deploy a
Virtual Machine.

3. Create a resource group:

az group create --name <resource group name> --location <Azure region>

where <resource group name> is the name of the resource group that will host the Virtual Machine
and <Azure region> is its location.

4. Create the Virtual Machine:

az vm create --resource-group <resource group name> --name <VM name> --image <Azure
Marketplace image> --generate-ssh-keys

This command generates SSH keys for subsequent authentication to the Linux OS.
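For example, the following sequence deploys a Linux VM end to end; the subscription, resource group, and VM names are illustrative, and UbuntuLTS is one of the Marketplace image aliases that the CLI accepts:

```shell
az login
az account set --subscription "Contoso Subscription"
az group create --name 20533E03-rg --location westeurope
az vm create --resource-group 20533E03-rg --name linuxvm0 \
    --image UbuntuLTS --admin-username azureuser --generate-ssh-keys
```

When the deployment completes, the command output includes the public IP address, which you can use to connect over SSH.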

Note: Azure CLI creates managed disks automatically during an image-based deployment.

Note: This process is more straightforward than the one described in the previous topic.
This approach applies a number of defaults regarding, for example, the naming of such objects as
the virtual network and its subnet, in addition to characteristics of these objects, such as the
virtual network’s IP address space and the subnet’s IP address range.

Additional Reading: For more information on a detailed procedure that allows you to
specify custom parameters of all VM-related objects, refer to: “Create a complete Linux virtual
machine with the Azure CLI” at: https://aka.ms/q1ngyt

Additional Reading: For information on creating Virtual Machines with managed disks
from a custom image by using Azure CLI, refer to: “Create a custom image of an Azure VM using
the CLI” at: https://aka.ms/d7ymfm

Using Azure CLI to create a virtual machine scale set with managed disks from a
Marketplace Linux image
To create a single placement group-based virtual machine scale set with managed disks from a
Marketplace image by using Azure CLI, perform the following high-level steps:

1. Create a resource group by running the az group create command.

2. Create the virtual machine scale set by running the az vmss create command. This command accepts
the following parameters:
o --resource-group designates the resource group where the virtual machine scale set will reside.

o --name designates the name of the virtual machine scale set.

o --image designates the image that the platform will use to provision the VMs in the scale set.
o --admin-username designates the name of the administrative user.

o --generate-ssh-keys triggers automatic generation of the SSH public and private keys.
This approach relies on a number of defaults that the az vmss create command facilitates. You can
accept the default values or you can assign values explicitly by including relevant parameters, such as
--vnet-name, --vnet-address-prefix, --subnet, --subnet-address-prefix, or --vm-sku.
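For example, the following two commands create a five-instance scale set behind a load balancer that the CLI provisions automatically; the resource group, scale set name, and instance count are illustrative:

```shell
az group create --name 20533E03-rg --location westeurope
az vmss create --resource-group 20533E03-rg --name vmss0 \
    --image UbuntuLTS --instance-count 5 \
    --admin-username azureuser --generate-ssh-keys
```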

Additional Reading: For more information on creating virtual machine scale sets with
managed disks from a Marketplace image by using Azure CLI, refer to: “Quickstart: Create a
virtual machine scale set with the Azure CLI 2.0” at: https://aka.ms/ji0hgx

Additional Reading: For information on using Azure CLI to create virtual machine scale
sets in availability zones, refer to: “Create a virtual machine scale set that uses Availability Zones”
at: https://aka.ms/Crl8t6

Creating Virtual Machines and virtual machine scale sets by using
deployment templates
Azure Resource Manager templates provide the
most flexible and efficient deployment option for
Virtual Machines and virtual machine scale sets.
The complexity of their implementation depends
largely on the extent to which you intend to
customize the target configuration.

The following code is an example of a complete template that defines deployment of a Virtual
Machine based on the latest Windows Server 2016 Datacenter image:

{
"$schema": "https://schema.management.azure.com/schemas/2015-01-
01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"adminUsername": { "type": "string" },
"adminPassword": { "type": "string" },
"vmSize": { "type": "string" },
"domainName": { "type": "string" }
},
"variables": {
"vnetID": "[resourceId('Microsoft.Network/virtualNetworks','vnet0')]",
"subnetRef": "[concat(variables('vnetID'),'/subnets/subnet0')]"
},
"resources": [
{
"apiVersion": "2017-10-01",
"type": "Microsoft.Network/publicIPAddresses",
"name": "vm0pip0",
"location": "[resourceGroup().location]",
"properties": {
"publicIPAllocationMethod": "Dynamic",
"dnsSettings": {
"domainNameLabel": "[parameters('domainName')]"
}
}
},
{
"apiVersion": "2017-10-01",
"type": "Microsoft.Network/virtualNetworks",
"name": "vnet0",
"location": "[resourceGroup().location]",
"properties": {
"addressSpace": { "addressPrefixes": [ "192.168.0.0/20" ] },
"subnets": [
{
"name": "subnet0",
"properties": { "addressPrefix": "192.168.0.0/24" }
}
]
}
},
{
"apiVersion": "2017-10-01",
"type": "Microsoft.Network/networkInterfaces",
"name": "vm0nic0",
"location": "[resourceGroup().location]",
"dependsOn": [
"[resourceId('Microsoft.Network/publicIPAddresses/', 'vm0pip0')]",
"[resourceId('Microsoft.Network/virtualNetworks/', 'vnet0')]"
],
"properties": {
"ipConfigurations": [
{
"name": "ipconfig0",
"properties": {
"privateIPAllocationMethod": "Dynamic",
"publicIPAddress": { "id":
"[resourceId('Microsoft.Network/publicIPAddresses','vm0pip0')]" },
"subnet": { "id": "[variables('subnetRef')]" }
}
}
]
}
},
{
"apiVersion": "2017-12-01",
"type": "Microsoft.Compute/virtualMachines",
"name": "vm0",
"location": "[resourceGroup().location]",
"dependsOn": [
"[resourceId('Microsoft.Network/networkInterfaces/', 'vm0nic0')]"
],
"properties": {
"hardwareProfile": { "vmSize": "[parameters('vmSize')]" },
"osProfile": {
"computerName": "vm0",
"adminUsername": "[parameters('adminUsername')]",
"adminPassword": "[parameters('adminPassword')]"
},
"storageProfile": {
"imageReference": {
"publisher": "MicrosoftWindowsServer",
"offer": "WindowsServer",
"sku": "2016-Datacenter",
"version": "latest"
},
"osDisk": {
"name": "vm0disk0",
"caching": "ReadWrite",
"createOption": "FromImage"
}
},
"networkProfile": {
"networkInterfaces": [
{
"id": "[resourceId('Microsoft.Network/networkInterfaces','vm0nic0')]"
}
]
}
}
}
]
}

The template contains four parameters that allow you to provide, at deployment time, the credentials of
the Windows local administrative account, the size of the Virtual Machine, and the DNS name of the
public IP address. It also contains two variables that reference the virtual network and the subnet
where the Virtual Machine will reside. It defines a Virtual Machine that uses managed disks and
that is accessible from the internet via a dynamic public IP address. The location of the Virtual Machine and
all of its resources will match the Azure region of the resource group that will host them.
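Assuming the template above is saved locally as azuredeploy.json, deploying it takes a single Azure PowerShell command; the resource group, file path, and parameter values shown here are illustrative:

```powershell
# Deploy the template into an existing resource group, supplying its four parameters
New-AzureRmResourceGroupDeployment `
    -ResourceGroupName "20533E03-rg" `
    -TemplateFile "C:\Templates\azuredeploy.json" `
    -adminUsername "Student" `
    -adminPassword "Pa55w.rd1234" `
    -vmSize "Standard_DS1_v2" `
    -domainName "contosovm0"
```

The cmdlet surfaces the template parameters (adminUsername, adminPassword, vmSize, and domainName) as dynamic parameters, so you can supply them directly on the command line or in a separate parameters file.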

Note: The sample templates in this topic are not fully parameterized for the sake of
simplicity.

Additional Reading: For more information regarding creating Virtual Machines by using
Azure Resource Manager templates, refer to: “Create a Windows virtual machine from a Resource
Manager template” at: http://aka.ms/Bt1gf6

The following code is an example of a complete template that defines deployment of a virtual machine
scale set based on the latest Windows Server 2016 Datacenter image:

{
"$schema": "https://schema.management.azure.com/schemas/2015-01-
01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"adminUsername": { "type": "string" },
"adminPassword": { "type": "string" },
"vmSize": { "type": "string" },
"capacity": { "type": "int" },
"domainName": { "type": "string" }
},
"variables": {
},
"resources": [
{
"type": "Microsoft.Network/virtualNetworks",
"name": "vnet0",
"location": "[resourceGroup().location]",
"apiVersion": "2017-10-01",
"properties": {
"addressSpace": {
"addressPrefixes": [ "192.168.0.0/20" ]
},
"subnets": [
{
"name": "subnet0",
"properties": {
"addressPrefix": "192.168.0.0/24"
}
}
]
}
},
{
"type": "Microsoft.Network/publicIPAddresses",
"name": "vmss0pip0",
"location": "[resourceGroup().location]",
"apiVersion": "2017-10-01",
"properties": {
"publicIPAllocationMethod": "Dynamic",
"dnsSettings": {
"domainNameLabel": "[parameters('domainName')]"
}
}
},
{
"type": "Microsoft.Network/loadBalancers",
"name": "vmss0lb0",
"location": "[resourceGroup().location]",
"apiVersion": "2017-10-01",
"dependsOn": [
"[concat('Microsoft.Network/publicIPAddresses/', 'vmss0pip0')]"
],
"properties": {
"frontendIPConfigurations": [
{
"name": "vmss0lb0fe",
"properties": {
"publicIPAddress": {
"id": "[resourceId('Microsoft.Network/publicIPAddresses', 'vmss0pip0')]"
}
}
}
],
"backendAddressPools": [
{
"name": "vmss0lb0be"
}
],
"inboundNatPools": [
{
"name": "vmss0lb0nat",
"properties": {
"frontendIPConfiguration": {
"id": "[concat(resourceId('Microsoft.Network/loadBalancers', 'vmss0lb0'), '/frontendIPConfigurations/vmss0lb0fe')]"
},
"protocol": "tcp",
"frontendPortRangeStart": "50000",
"frontendPortRangeEnd": "50119",
"backendPort": "3389"
}
}
]
}
},
{
"type": "Microsoft.Compute/virtualMachineScaleSets",
"name": "vmss0",
"location": "[resourceGroup().location]",
"apiVersion": "2017-12-01",
"dependsOn": [
"[concat('Microsoft.Network/loadBalancers/', 'vmss0lb0')]",
"[concat('Microsoft.Network/virtualNetworks/', 'vnet0')]"
],
"sku": {
"name": "[parameters('vmSize')]",
"capacity": "[parameters('capacity')]"
},
"properties": {
"overprovision": "true",
"upgradePolicy": {
"mode": "Automatic"
},
"virtualMachineProfile": {
"storageProfile": {
"osDisk": {
"caching": "ReadWrite",
"createOption": "FromImage"
},
"imageReference": {
"publisher": "MicrosoftWindowsServer",
"offer": "WindowsServer",
"sku": "2016-Datacenter",
"version": "latest"
}
},
"osProfile": {
"computerNamePrefix": "vmss0",
"adminUsername": "[parameters('adminUsername')]",
"adminPassword": "[parameters('adminPassword')]"
},
"networkProfile": {
"networkInterfaceConfigurations": [
{
"name": "nic0",
"properties": {
"primary": "true",
"ipConfigurations": [
{
"name": "ipconfig0",
"properties": {
"subnet": {
"id": "[concat('/subscriptions/',
subscription().subscriptionId,'/resourceGroups/', resourceGroup().name,
'/providers/Microsoft.Network/virtualNetworks/', 'vnet0', '/subnets/', 'subnet0')]"
},
"loadBalancerBackendAddressPools": [
{
"id": "[concat('/subscriptions/',
subscription().subscriptionId,'/resourceGroups/', resourceGroup().name,
'/providers/Microsoft.Network/loadBalancers/', 'vmss0lb0', '/backendAddressPools/',
'vmss0lb0be')]"
}
],
"loadBalancerInboundNatPools": [
{
"id": "[concat('/subscriptions/',
subscription().subscriptionId,'/resourceGroups/', resourceGroup().name,
'/providers/Microsoft.Network/loadBalancers/', 'vmss0lb0',
'/inboundNatPools/', 'vmss0lb0nat')]"
}
]
}
}
]
}
}
]
}
}
}
}
]
}

The template contains four parameters that allow you to provide, at deployment time, the credentials of
the Windows local administrative account, the size of VMs in the scale set, their count, and the DNS name
of the public IP address. It defines a virtual machine scale set that uses managed disks. The location of the
virtual machine scale set and all of its resources will match the Azure region of the resource group that
will host them. The template also includes a definition of an Azure load balancer and its network
configuration, which links to the network profile of the VM scale set. Note that the template does not
include autoscale settings; adding them would require a Microsoft.Insights/autoscaleSettings resource.
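As a sketch of what such an addition might look like, the following Microsoft.Insights/autoscaleSettings fragment scales the set between two and ten instances based on average CPU utilization; the thresholds, time windows, and names are illustrative assumptions:

```json
{
  "type": "Microsoft.Insights/autoscaleSettings",
  "name": "vmss0autoscale",
  "apiVersion": "2015-04-01",
  "location": "[resourceGroup().location]",
  "dependsOn": [
    "[resourceId('Microsoft.Compute/virtualMachineScaleSets', 'vmss0')]"
  ],
  "properties": {
    "enabled": true,
    "targetResourceUri": "[resourceId('Microsoft.Compute/virtualMachineScaleSets', 'vmss0')]",
    "profiles": [
      {
        "name": "cpuProfile",
        "capacity": { "minimum": "2", "maximum": "10", "default": "2" },
        "rules": [
          {
            "metricTrigger": {
              "metricName": "Percentage CPU",
              "metricResourceUri": "[resourceId('Microsoft.Compute/virtualMachineScaleSets', 'vmss0')]",
              "timeGrain": "PT1M",
              "statistic": "Average",
              "timeWindow": "PT5M",
              "timeAggregation": "Average",
              "operator": "GreaterThan",
              "threshold": 75
            },
            "scaleAction": { "direction": "Increase", "type": "ChangeCount", "value": "1", "cooldown": "PT5M" }
          }
        ]
      }
    ]
  }
}
```

A matching scale-in rule (operator LessThan, direction Decrease) would normally accompany the scale-out rule shown here.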

Additional Reading: For more information regarding creating virtual machine scale sets by
using Azure Resource Manager templates, refer to: “Quickstart: Create a Windows virtual machine
scale set with an Azure template” at: https://aka.ms/Rsm6a7 and “Quickstart: Create a Linux
virtual machine scale set with an Azure template” at: https://aka.ms/pc6blz

Additional Reading: For information on using Azure Resource Manager templates to


create virtual machine scale sets in availability zones, refer to: “Create a virtual machine scale set
that uses Availability Zones” at: https://aka.ms/Crl8t6

Demonstration: Creating a Virtual Machine and virtual machine scale set
by using the Azure portal
In this demonstration, you will see how to create a Virtual Machine and virtual machine scale set from a
Marketplace image by using the Azure portal.

Question: Why is an Azure Resource Manager template beneficial for deploying multiple
virtual machines?

Lab: Deploying Virtual Machines


Scenario
As part of the planning for deployment of Virtual Machines to Azure, Adatum Corporation has evaluated
its deployment options. You must use the Azure portal and Azure PowerShell to deploy two Azure
Virtual Machines for the database tier of the Research and Development application. To facilitate resource
tracking, you should ensure that the virtual machines are part of the same resource group. Both VMs
should be part of the same availability set.
You must use an Azure Resource Manager template to deploy two additional Linux VMs and two
additional Windows VMs that the ResDev application will use. The virtual machines should be part of the
same resource group, to facilitate resource tracking. Linux virtual machines should reside on the
app subnet, and Windows virtual machines should reside on the web subnet of the
20533E0301-LabVNet virtual network.

Objectives
After completing this lab, you will be able to:
• Create Virtual Machines by using the Azure portal and Azure PowerShell.

• Validate virtual machine creation.

• Use Visual Studio and an Azure Resource Manager template to deploy Azure Resource Manager
virtual machines.

• Use Azure PowerShell and an Azure Resource Manager template to deploy virtual machines.

Note: The lab steps for this course change frequently due to updates to Microsoft Azure.
Microsoft Learning updates the lab steps frequently, so they are not available in this manual. Your
instructor will provide you with the lab documentation.

Lab Setup
Estimated Time: 40 minutes
Virtual machine: 20533E-MIA-CL1

User name: Student

Password: Pa55w.rd
The virtual machine should be running from the previous lab.

Question: What differences regarding Virtual Machine storage did you notice when you
created a virtual machine in the Azure portal versus in Azure PowerShell?

Question: Can Microsoft Visual Studio and Azure PowerShell use the same Azure Resource
Manager template to deploy a Virtual Machine?

Question: How would you configure an Azure Resource Manager template to deploy
multiple Virtual Machines with different configurations?

Module Review and Takeaways


Best Practices
• Use Azure Resource Manager deployment model for new deployments.

• Use Azure Resource Manager resource groups to organize Virtual Machines within your subscription.

• Use a consistent naming convention for your Azure IaaS infrastructure.

• Use Azure Resource Manager templates to deploy and modify Virtual Machines.

Review Questions

Question: Can you migrate on-premises virtual machines directly to Azure?

Question: What tools can you use to create and modify Azure Resource Manager templates?

Module 4
Managing Azure VMs
Contents:
Module Overview 4-1
Lesson 1: Configuring Azure VMs 4-2

Lesson 2: Managing disks of Azure VMs 4-10

Lesson 3: Managing and monitoring Azure VMs 4-17


Lab: Managing Azure VMs 4-28

Module Review and Takeaways 4-29

Module Overview
Configuration, management, and monitoring of Microsoft Azure virtual machines (VMs) are essential in
delivering secure, available, and scalable Azure-based infrastructure solutions. This module presents some
of the most common techniques that allow you to administer and maintain Azure VMs to better suit your
custom requirements.

Objectives
After completing this module, you will be able to:

• Configure Azure VMs.

• Manage Azure VM disks.

• Manage and monitor Azure VMs.



Lesson 1
Configuring Azure VMs
Azure VMs are one of the core components of Microsoft Azure infrastructure as a service (IaaS)
deployments. In this lesson, you will look at the different options for configuring availability, scalability,
and performance of Azure VMs.

Lesson Objectives
After completing this lesson, you will be able to:

• Explain how to connect to an Azure VM.

• Explain how to connect to Linux Azure VMs via Secure Shell (SSH).
• Describe how to scale Azure VMs.

• Configure security of Azure VMs.

Demonstration: Preparing the lab environment


Perform the tasks in this demonstration to prepare the lab environment. The environment will be
configured as you progress through this module, learning about the Azure services that you will use in the
lab.

Important: The scripts used in this course might delete objects that you have in your
subscription. Therefore, you should complete this course by using new Azure subscriptions. You
should also use a new Microsoft account that is not associated with any other Azure subscription.
This will eliminate the possibility of any potential confusion when running setup scripts.

This course relies on custom Azure PowerShell modules, including Add-20533EEnvironment to prepare
the lab environment for labs, and Remove-20533EEnvironment to perform clean-up tasks at the end of
the module.

Connecting to an Azure VM
To manage an Azure VM, you can use the same
set of tools that you used to deploy it. However,
you will also want to interact with an operating
system (OS) running within the VM. The methods
you can use to accomplish this are OS-specific and
include the following options:

• Remote Desktop Protocol (RDP) allows you to establish a graphical user interface (GUI)
session to an Azure VM that runs any
supported version of Windows. The Azure
portal automatically enables the Connect
button on the Azure Windows VM blade if
the VM is running and accessible via a public or private IP address, and if it accepts inbound traffic on
TCP port 3389. After you click this button, the portal will automatically provision an .rdp file, which

you can either open or download and save for later use. Opening the file initiates an RDP connection
to the corresponding VM. The Azure PowerShell Get-AzureRmRemoteDesktopFile cmdlet provides
the same functionality.

• Windows Remote Management (WinRM) allows you to establish a command-line session to an Azure
VM that runs any supported version of Windows. You can also use WinRM to run noninteractive
Windows PowerShell scripts. WinRM facilitates additional session security by using certificates. You
can upload a certificate that you intend to use to Azure Key Vault prior to establishing a session. The
process of setting up WinRM connectivity includes the following, high-level steps:

o Creating a key vault.

o Creating a self-signed certificate.

o Uploading the certificate to the key vault.

o Identifying the URL of the certificate uploaded to the key vault.

o Referencing the URL in the Azure VM configuration.


WinRM uses TCP port 5986 by default, but you can change it to a custom value. In either case, you
must ensure that no network security groups are blocking inbound traffic on the port that you
choose.

Additional Reading: For more information, refer to: “Setting up WinRM access for Virtual
Machines in Azure Resource Manager” at: https://aka.ms/ljezi1
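
The high-level steps above can be sketched in Azure PowerShell. This is a hedged outline rather than a complete procedure; all names, passwords, and DNS labels below are placeholders, and the exact secret format is described in the referenced article:

```powershell
# 1. Create a key vault that the Azure platform can read certificates from during deployment.
New-AzureRmKeyVault -VaultName "ContosoVault" -ResourceGroupName "ContosoRG" `
    -Location "West Europe" -EnabledForDeployment -EnabledForTemplateDeployment

# 2. Create a self-signed certificate and export it to a password-protected .pfx file.
$cert = New-SelfSignedCertificate -DnsName "contosovm.westeurope.cloudapp.azure.com" `
    -CertStoreLocation "Cert:\CurrentUser\My"
$pfxPassword = ConvertTo-SecureString -String "P@ssw0rd!" -AsPlainText -Force
Export-PfxCertificate -Cert $cert -FilePath ".\winrm.pfx" -Password $pfxPassword

# 3. Package the .pfx as a base64-encoded JSON blob and upload it as a key vault secret.
$pfxBytes = [System.IO.File]::ReadAllBytes(".\winrm.pfx")
$jsonBlob = @{
    data     = [System.Convert]::ToBase64String($pfxBytes)
    dataType = "pfx"
    password = "P@ssw0rd!"
} | ConvertTo-Json
$encoded = [System.Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes($jsonBlob))
$secret  = ConvertTo-SecureString -String $encoded -AsPlainText -Force
Set-AzureKeyVaultSecret -VaultName "ContosoVault" -Name "winrmcert" -SecretValue $secret

# 4. Identify the URL of the uploaded certificate. You then reference this URL
#    in the osProfile/secrets section of the Azure VM configuration (step 5).
(Get-AzureKeyVaultSecret -VaultName "ContosoVault" -Name "winrmcert").Id
```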

• SSH allows you to establish a command-line interface session to an Azure VM that runs the Linux OS.
To do so from a Windows computer, you typically use a terminal emulator, such as PuTTY. Most Linux
distributions offer an OpenSSH package. Several open source and non-Microsoft SSH client programs
are available for both Windows and Linux.
• RDP for Linux allows you to establish a GUI session to an Azure VM that runs any supported version
of the Linux OS. This functionality relies on the xfce4 desktop environment and the xrdp Remote
Desktop server. If you configure a Linux VM with SSH authentication, you must also assign a password
to the Linux administrative user account. In addition, you must ensure that no network security
groups are blocking traffic on TCP 3389.

Additional Reading: For more information, refer to: “Install and configure Remote Desktop
to connect to a Linux VM in Azure” at: https://aka.ms/tkvozt
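
For example, with the OpenSSH client (included in recent Windows 10 and Windows Server releases, and in most Linux distributions), connecting to a Linux Azure VM from a PowerShell prompt might look like the following. The user name and addresses are placeholders:

```powershell
# Password-based authentication against a public IP address:
ssh azureuser@13.80.10.20

# Key-based authentication, supplying the private key that matches
# the public key provided when the VM was deployed:
ssh -i $env:USERPROFILE\.ssh\id_rsa azureuser@contosovm.westeurope.cloudapp.azure.com
```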

Note: If you forget the OS administrative credentials, you can reset them by using the VM
Access extension. This includes changing an SSH certificate for Linux VMs. You will learn about
VM extensions in lesson 3 of this module.

Note: You can facilitate connectivity to an Azure VM from the internet in two ways:

• Assign a public IP address to one of its network adapters.


• Place the VM behind an internet-facing load balancer and configure a network address translation
(NAT) rule that directs incoming traffic on a designated port to the appropriate port of the OS within
the VM.

Demonstration: Connecting to a Linux Azure VM via SSH


In this demonstration, you will see how to connect to a Linux Azure VM via SSH.

Scaling Azure VMs


In general, there are two methods of scaling Azure
VMs:

• Vertically. You scale by changing the VM size.

• Horizontally. You scale by changing the number of VMs that host the same workload and share their
load through load balancing.

Vertical scaling
As mentioned in the previous module, you can
change a VM size, if your current configuration
does not violate the constraints of the VM size
that you intend to use. For example, you might
need to remove an extra virtual network adapter or a data disk attached to your VM before you scale it
down to a smaller size.

Note: Changing an Azure VM’s size requires a restart if the new size is part of the same
compute cluster. If that is not the case, resizing will require stopping (deallocating) the Azure VM.
If that VM is part of an availability set, you will need to stop all VMs in the same availability set
and resize them simultaneously.
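
A vertical scaling operation takes only a few Azure PowerShell commands. The following is a sketch; the resource names and the target size are placeholders, and the resize triggers a restart as described in the note above:

```powershell
# List the sizes available for this VM, then resize it.
Get-AzureRmVMSize -ResourceGroupName "ContosoRG" -VMName "ContosoVM"

$vm = Get-AzureRmVM -ResourceGroupName "ContosoRG" -Name "ContosoVM"
$vm.HardwareProfile.VmSize = "Standard_DS3_v2"   # placeholder target size
Update-AzureRmVM -ResourceGroupName "ContosoRG" -VM $vm
```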

Horizontal scaling
The most common way to implement horizontal scaling of Azure VMs uses virtual machine scale sets. A
scale set consists of a group of Windows or Linux VMs that share identical configurations and deliver the
same functionality to support a service or application. With scale sets, you can increase or decrease the
number of VMs dynamically, to adjust to changes in demand for the workload they host. To avoid data
loss due to deprovisioning of VMs during scaling in, the workload should be stateless. VMs in the same
scale set are automatically distributed across five fault domains and five update domains.

Scale sets integrate with Azure load balancers to handle dynamic distribution of network traffic across
multiple VMs. They also support the use of NAT rules for connectivity to individual VMs in the same scale
set.

From a storage perspective, you can configure scale sets with either managed or unmanaged disks. Using
managed disks offers additional scalability benefits. With managed disks, when using an Azure
Marketplace image to provision a VM scale set, you can scale out up to 1000 VMs. With unmanaged disks,
the upper limit is 100 VMs per scale set. When using custom images, managed disks allow you to scale
out up to 300 VMs. With unmanaged standard storage disks, you should limit your deployment to 20
VMs. You can increase this number to 40 if you set the overprovision property of the VM scale set to
false. This way, you ensure that the aggregate Input/Output Operations Per Second (IOPS) of virtual disks
in the VM scale set stays below the 20,000-IOPS limit of a single standard Microsoft Azure Storage
account.

Additional Reading: For more information, refer to: “What are virtual machine scale sets
in Azure?” at: http://aka.ms/xl3xw5

Implementing scale sets


To provision a VM scale set, you can use the Azure portal, Azure PowerShell, Azure Command-Line
Interface (CLI), or Azure Resource Manager templates. The templates reference the Microsoft.Compute
/virtualMachineScaleSets resource type. This resource type implements many scale set properties,
including:
• sku.tier. The size of the Azure VMs in the scale set.

• sku.capacity. The number of VM instances that the scale set will autoprovision.

• properties.virtualMachineProfile. The disk, OS, and network settings of the Azure VMs in the scale
set.

To configure Autoscale, the template must reference the Microsoft.Insights/autoscaleSettings resource
type. Some of the more relevant properties that this resource type implements include:

• metricName. The name of the performance metric that determines whether to trigger horizontal
scaling (for example, Percentage central processing unit [CPU]).

• metricResourceUri. The resource identifier designating the scale set.

• timeGrain. The frequency with which performance metrics are collected (between one minute and 12
hours).

• Statistic. The method of calculating aggregate metrics from multiple Azure VMs (Average, Minimum,
Maximum).

• timeWindow. Range of time for metrics calculation (between five minutes and 12 hours).
• timeAggregation. The method of calculating aggregate metrics over time (Average, Minimum,
Maximum, Last, Total, Count).

• Threshold. The value that triggers the scale action. For example, if you set it to 50 when using the
Percentage CPU metricName, the number of Azure VMs in the set would increase when the CPU
usage exceeds 50 percent. The details of the method used to evaluate when the threshold is reached
depend on other properties, such as statistic, timeWindow, or timeAggregation.
• Operator. The criterion that determines the method of comparing collected metrics and the
threshold (Equals, NotEquals, GreaterThan, GreaterThanOrEqual, LessThan, LessThanOrEqual).

• Direction. The type of horizontal scaling invoked as the result of reaching the threshold (increase or
decrease, representing scaling out or scaling in, respectively).
• Value. The number of Azure VMs added to or removed from the scale set (one or more).

• Cooldown. The amount of time to wait between the most recent scaling event and the next action
(from one minute to one week).

Additional Reading: For more information on scale sets, refer to: “Advanced autoscale
configuration using Resource Manager templates for scale sets” at: https://aka.ms/Lmmv02
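
The properties listed above appear together in an autoscaleSettings resource similar to the following template fragment. The scale set name, capacities, and threshold are placeholders chosen for illustration:

```json
{
  "type": "Microsoft.Insights/autoscaleSettings",
  "apiVersion": "2015-04-01",
  "name": "autoscale-cpu",
  "location": "[resourceGroup().location]",
  "properties": {
    "enabled": true,
    "targetResourceUri": "[resourceId('Microsoft.Compute/virtualMachineScaleSets', 'contosoScaleSet')]",
    "profiles": [
      {
        "name": "cpu-profile",
        "capacity": { "minimum": "2", "maximum": "10", "default": "2" },
        "rules": [
          {
            "metricTrigger": {
              "metricName": "Percentage CPU",
              "metricResourceUri": "[resourceId('Microsoft.Compute/virtualMachineScaleSets', 'contosoScaleSet')]",
              "timeGrain": "PT1M",
              "statistic": "Average",
              "timeWindow": "PT5M",
              "timeAggregation": "Average",
              "operator": "GreaterThan",
              "threshold": 50
            },
            "scaleAction": {
              "direction": "Increase",
              "type": "ChangeCount",
              "value": "1",
              "cooldown": "PT5M"
            }
          }
        ]
      }
    ]
  }
}
```

This rule adds one VM instance whenever the average CPU usage across the scale set exceeds 50 percent over a five-minute window, then waits five minutes before evaluating further scale actions.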

Configuring security of Azure VMs


Azure offers many technologies that help to keep
customer computing environments secure. In this
topic, you will learn about the additional security
measures that you can implement by leveraging
Azure capabilities.

Restricting access to Azure VMs from the internet
For security reasons, you might want to prevent
connectivity to an Azure VM from the internet. To
accomplish this, ensure that there is no public IP
address assigned to the default network adapter
of the Azure VM and there are no NAT rules
providing such connectivity via a load balancer. You will still be able to connect to the OS within the
Azure VM if the computer from which you initiate the connection can reach any of the private IP
addresses assigned to the Azure VM network adapters.

If preventing internet connectivity to an Azure VM is not an option, you can reduce the scope of IP
addresses from which a connection to that VM can originate. To do so, modify the network security group
rule that allows incoming traffic via the relevant port. This is feasible if you know the IP address
representing the public endpoint of the computers from which you intend to establish a remote
management session. In addition, you can control both inbound and outbound network traffic by using
an OS-level firewall.
Each Windows VM created by using an Azure Marketplace image has its local firewall enabled. By default,
Windows Defender Firewall includes an enabled rule that allows incoming RDP connections. If you want to
allow connectivity for applications or services that listen on a different port, you should configure
Windows Defender Firewall accordingly.

Similarly, Azure network security groups associated with an Azure VM that you create by using the Azure
portal include a rule allowing connectivity via RDP or SSH (depending on the VM’s OS), by default. To
enable connections on other ports, add extra rules to the security group.
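
For example, to narrow an existing network security group rule so that RDP connections are accepted only from a known public IP address, you might use Azure PowerShell as follows. All names and the source address are placeholders:

```powershell
# Retrieve the NSG, tighten the source address of its RDP rule, and save the change.
$nsg = Get-AzureRmNetworkSecurityGroup -Name "ContosoNSG" -ResourceGroupName "ContosoRG"
Set-AzureRmNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg -Name "default-allow-rdp" `
    -Access Allow -Protocol Tcp -Direction Inbound -Priority 1000 `
    -SourceAddressPrefix "203.0.113.25/32" -SourcePortRange * `
    -DestinationAddressPrefix * -DestinationPortRange 3389
Set-AzureRmNetworkSecurityGroup -NetworkSecurityGroup $nsg
```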

Azure offers services that allow you to further secure access to an Azure VM’s OS and disks. These services
include Azure Key Vault and Azure Disk Encryption.

Understanding Key Vault


Key Vault stores cryptographic keys and secrets, such as keys of Azure Storage accounts, connection
strings containing user credentials, or passwords securing private keys. The vault maintains its contents in
encrypted form, relying on hardware security module (HSM)–based protection.

A secret is a small data blob (of up to 10 kilobytes [KB] in size) that authorized users and applications can
add to the vault, or view, modify, and delete while the secret resides in the vault. To authorize users and
applications, you must grant secret-specific key vault access policy permissions to their respective Azure
Active Directory (Azure AD) identities. You also must ensure that the Azure AD tenant hosting these
identities is associated with the Azure subscription hosting the key vault.

Unlike secrets, keys stored in a vault are not directly readable. Instead, when you add a key to the vault,
authorized users and applications can invoke cryptographic functions which perform operations that
require knowledge of that key. The ability to complete such invocation is also subject to a successful
Azure AD–based authentication. To access keys and secrets, users and applications must possess valid
Azure AD tokens representing security principals with sufficient permissions to the target vault. To assign
these permissions, you use key-specific key vault access policy permissions.

Note: You apply both secret and key-specific access control permissions at the key vault
level. There is no support for object-level permissions. There is a limit of 16 access policy control
entries for a key vault.

Key Vault supports two types of keys:

• RSA. With this key type, the key vault performs cryptographic operations in software. However, while
at rest, the key resides in HSM.

• RSA-HSM. With this key type, the key vault performs cryptographic operations by using HSM. While
at rest, the key also resides in HSM.

Every secret and key residing in Azure Key Vault has a unique identifier, which you must reference when
attempting to access it. In addition, it is possible to assign several additional attributes to keys to
customize their usage, such as:

• exp. An expiration date for the key, after which it is no longer possible to retrieve it from the vault.
• nbf. A date on which the key becomes accessible.

• enabled. A Boolean value that determines whether the key is accessible (assuming that the access
attempt occurs between the dates set by the values of the nbf and exp parameters).
Secrets support the contentType attribute in the form of a string of up to 255 characters, which you can
use to describe their purpose.
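
These attributes map to parameters of the Add-AzureKeyVaultKey cmdlet. The following sketch creates a key whose nbf and exp attributes restrict its validity window; the vault and key names and the dates are placeholders:

```powershell
$nbf = (Get-Date).AddDays(1)     # key becomes usable tomorrow (nbf)
$exp = (Get-Date).AddYears(2)    # key expires in two years (exp)
Add-AzureKeyVaultKey -VaultName "ContosoVault" -Name "ContosoKey" `
    -Destination Software -NotBefore $nbf -Expires $exp
```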

Note: To delegate management of a key vault, use Role-Based Access Control (RBAC). Note
that RBAC assignments do not control access to individual secrets or keys. To grant access to keys
and secrets, you must use access policies.

Using Key Vault


You can use REST API, Azure PowerShell, or Azure CLI to retrieve secrets and public parts of keys (in
JavaScript Object Notation [JSON] format) from a vault. You can also perform other management tasks
targeting keys (create, import, update, delete, list, backup, or restore) and secrets (set, list, or delete). In
addition, each of these methods allows you to manage the vault and its properties. The following
Windows PowerShell cmdlets facilitate interaction with an Azure Key Vault:

• New-AzureRmKeyVault. Creates a new Key Vault.

• Add-AzureKeyVaultKey. Creates a new—or imports an existing—key into a Key Vault.


• Get-AzureKeyVaultKey. Retrieves a public part of a key from a Key Vault.

• Get-AzureKeyVaultSecret. Retrieves a secret from a Key Vault.

• Remove-AzureKeyVaultKey. Removes a key from a Key Vault.

To accomplish the same tasks by using Azure CLI, run the following commands:

• az keyvault create

• az keyvault key create

• az keyvault key show

• az keyvault secret show

• az keyvault key delete
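
A minimal end-to-end sequence combining these commands might look like the following sketch. All names are placeholders, and an authenticated Azure PowerShell session with sufficient vault permissions is assumed:

```powershell
# Create a vault and a software-protected key.
New-AzureRmKeyVault -VaultName "ContosoVault" -ResourceGroupName "ContosoRG" -Location "West Europe"
Add-AzureKeyVaultKey -VaultName "ContosoVault" -Name "ContosoKey" -Destination Software

# Store a secret with a descriptive contentType, then read back its plain-text value.
$secretValue = ConvertTo-SecureString -String "P@ssw0rd!" -AsPlainText -Force
Set-AzureKeyVaultSecret -VaultName "ContosoVault" -Name "AdminPassword" `
    -SecretValue $secretValue -ContentType "local administrator password"
(Get-AzureKeyVaultSecret -VaultName "ContosoVault" -Name "AdminPassword").SecretValueText
```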



Additional Reading: For more information, refer to: “Get started with Azure Key Vault” at:
http://aka.ms/Wnz2hb

Using Azure Disk Encryption


Azure Disk Encryption is a capability built into the Azure platform that allows you to encrypt file system
volumes residing on Windows and Linux Azure VM disks. Azure Disk Encryption leverages existing file
system–based encryption technologies already available in the guest OS, such as BitLocker in Windows
and DM-Crypt in Linux. It uses these technologies to provide encryption of volumes hosting the OS and
data. The solution integrates with Key Vault to store volume encryption keys securely. You can also
encrypt these keys by utilizing the vault’s key encryption key functionality. The combination of these
features enhances security of Azure VM disks at rest by encrypting their content.

Note: It is possible to encrypt the data (but not the OS) volumes of Azure VMs running
Windows by using BitLocker without relying on Azure Disk Encryption. You can also encrypt any
volume (including the OS volume) by implementing non-Microsoft solutions offered on Azure
Marketplace, such as CloudLink SecureVM. Additionally, you can combine Azure Disk Encryption
with Azure Storage Service Encryption, which encrypts all the content of the storage account.

You can use Azure Disk Encryption in three scenarios, all of which are applicable to Azure Resource Manager
deployments of Standard-tier Azure VMs:

• Enabling encryption on new Azure VMs created from a customer-encrypted virtual hard disk (.vhd
file) by using existing encryption keys.

• Enabling encryption on new Azure VMs created from Azure Marketplace images.

• Enabling encryption on existing Azure VMs that are already running in Azure.

Note: Azure Disk Encryption supports both managed and unmanaged disks.

Azure Disk Encryption is not supported for:

• Basic-tier Azure VMs.


• Classic Azure VMs.

• Integration with on-premises Key Management Service.

• Content of Azure Files (Azure file share), network file system (NFS), dynamic volumes, and software-
based Redundant Array of Independent Disks (RAID) volumes on Azure VMs. There is support for
encryption of volumes created by using Storage Spaces on Windows VMs and by using either mdadm
or Logical Volume Manager (LVM) on Linux VMs.

• Disabling encryption on the OS drive for Linux VMs. For Linux VMs, you can disable encryption on
data drives. For Windows VMs, you can disable encryption on both OS and data drives.

Azure Disk Encryption requires additional steps to provide the Azure platform with access to the Azure
Key Vault where secrets and encryption keys will reside. In particular, you must enable the Enable access
to Azure Disk Encryption for volume encryption advanced access policy on the vault. When applying
encryption to new or existing volumes, you also must provision an Azure AD application with write
permissions to the vault. This application provides a security context for the Azure platform, allowing it to
securely store newly generated cryptographic material. In addition, you must configure the vault access
policy to allow the Microsoft.Compute resource provider and Azure Resource Manager to retrieve its
secrets during VM deployments. Finally, you must enable encryption on new or existing Azure Resource

Manager Azure VMs. Details of this last step depend on which of the three scenarios you are
implementing and which deployment methodology you are using.

Additional Reading: For more information, refer to: “Azure Disk Encryption for Windows
and Linux IaaS VMs” at: http://aka.ms/Jvkb03
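
Assuming the key vault prerequisites described above are in place, enabling encryption on an existing Windows VM might look like the following sketch. All names are placeholders, and the Azure AD application ID and secret must come from an application you provisioned with write permissions to the vault:

```powershell
# Allow the Azure platform to use the vault for volume encryption.
Set-AzureRmKeyVaultAccessPolicy -VaultName "ContosoVault" -ResourceGroupName "ContosoRG" `
    -EnabledForDiskEncryption

# Enable encryption on the VM, referencing the vault and the Azure AD application.
$vault = Get-AzureRmKeyVault -VaultName "ContosoVault" -ResourceGroupName "ContosoRG"
Set-AzureRmVMDiskEncryptionExtension -ResourceGroupName "ContosoRG" -VMName "ContosoVM" `
    -AadClientID "<Azure AD application ID>" -AadClientSecret "<application secret>" `
    -DiskEncryptionKeyVaultUrl $vault.VaultUri -DiskEncryptionKeyVaultId $vault.ResourceId `
    -VolumeType All
```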

Additional Reading: For more information about Azure’s general security practices, refer
to: http://aka.ms/Guhssp

Check Your Knowledge


Question

What is the maximum number of fault domains that scale sets support?

Select the correct answer.

Two

Three

Five

20

50

Lesson 2
Managing disks of Azure VMs
Azure VMs use disks for different purposes, including OS, data, and temporary storage. In this lesson, you
will learn about management and configuration of these disks. You will also learn how to attach new and
existing disks to an Azure VM, and how to configure multi-disk volumes in Windows and Linux VMs.

Lesson Objectives
After completing this lesson, you will be able to:

• Describe different methods of managing Azure VM disks.

• Describe Azure support for VM disk mobility.


• Describe how to manage disk volumes in Azure VMs.

• Configure storage in Windows and Linux VMs.

Managing VM disks
When creating a VM based on an image, the
Azure platform will automatically provision a new
OS disk. Alternatively, you can create a new Azure
VM based on an existing disk. You would do this
when you migrate a VM from your on-premises
environment to Azure. Similarly, you can attach
one or more new or existing data disks to an
Azure VM, up to the limit determined by its size.

Attaching a disk to an Azure VM


To attach a disk to an Azure VM, you can use a
variety of methods, including the Azure portal,
Azure PowerShell, Azure CLI, or Azure Resource
Manager templates.

When using the Azure portal, take the following steps:

1. Navigate to the blade of the Azure VM to which you want to attach new disks.

2. On the VM blade, click Disks, and then click Add data disk. When using managed disks, you will
then be able to select any currently available managed disk in the same region and subscription, or
use the Create disk option.

3. Depending on the type of disks currently attached to the VM, the Azure portal will display either the
Create managed disk or Attach unmanaged disk blade. With unmanaged disks, you will be able to
choose either New (empty disk) or Existing blob as the source type. With managed disks, your
choices are Snapshot, Storage blob, and None (empty disk). With unmanaged disks, when
referencing a new or existing blob, you must provide the storage account and its container. Similarly,
with managed disks, when selecting a source blob, you will need to specify its exact location,
including the storage account and container. In addition, you will have to specify whether the blob is
a data disk or whether it contains the Windows or Linux OS installation. When using a snapshot as the
new disk’s source, you simply select the name of an existing snapshot in the same subscription and
region as the Azure VM.

Note: Managed disks simplify snapshot management. The Azure portal allows you to
create snapshots from existing managed disks and create new disks from an existing snapshot.
The snapshot creation is almost instantaneous. Note that at the time of authoring this content,
managed disks support only full snapshots. Unmanaged disks offer support for both full and
incremental snapshots.

The same functionality is available when using Azure PowerShell or Azure CLI. For example, to attach a
new unmanaged data disk by using Azure PowerShell, you would run the following commands:

Add-AzureRmVMDataDisk -ResourceGroupName <Resource Group name> -VM <VM object> -Name <Disk name>
-VhdUri <URI of the blob representing the disk in the storage account> -CreateOption Empty
-DiskSizeInGB <Disk size in GB> -LUN <LUN number> -Caching <ReadOnly, ReadWrite, or None>
Update-AzureRmVM –ResourceGroupName <Resource Group name> -VM <VM object>

To attach a new managed disk by using Azure PowerShell, you would run the following commands:

$mdConfig1 = New-AzureRmDiskConfig -AccountType <PremiumLRS or StandardLRS> -Location
<Azure region> -CreateOption Empty -DiskSizeGB <disk size in GB>
$md1 = New-AzureRmDisk -DiskName <Disk name> -Disk $mdConfig1 -ResourceGroupName <Resource
Group name>
Add-AzureRmVMDataDisk -VM <VM object> -Name <Disk name> -CreateOption Attach -
ManagedDiskId $md1.Id -Lun <LUN number> -Caching <ReadOnly, ReadWrite, or None>
Update-AzureRmVM -ResourceGroupName <Resource Group name> -VM <VM object>

Additional Reading: For more information, refer to: “Attach a data disk to a Windows VM
using PowerShell” https://aka.ms/sn9u6r

To create a snapshot by using Azure PowerShell, run the New-AzureRmSnapshot cmdlet. If you prefer
Azure CLI, you can use az snapshot create instead. Creating snapshots of unmanaged disks requires
programmatic methods.

Additional Reading: For more information, refer to: “Create a blob snapshot” at:
https://aka.ms/qnxnaz

Detaching a disk from an Azure VM


You can use the same methods to detach a disk from an Azure VM as you use to attach a disk, including
the Azure portal, Azure PowerShell, and Azure CLI.

To detach an Azure VM data disk by using the Azure portal, use the following steps:

1. In the Azure portal, navigate to the blade of the VM from which you will detach the disk, and then
click Disks.

2. On the Disks blade, click Edit.

3. Click an icon to the right of the disk that you intend to detach, and then click Save.

Note: You cannot detach the OS disk. To make the OS disk available (for example, to use it
when creating another Azure VM), you must first delete the Azure VM.

To detach a disk by using Azure PowerShell, use the following commands:

Remove-AzureRmVMDataDisk -VM <VM object> -DataDiskNames <Disk name>
Update-AzureRmVM -ResourceGroupName <Resource group name> -Name <VM name> -VM <VM object>

To detach a disk by using Azure CLI, use the following command:

az vm disk detach --name <Disk name> --resource-group <Resource group name> --vm-name <VM name>

Modifying Azure VM disks


You can modify an existing Azure VM disk configuration by:

• Switching the host caching mode of the disk between None, Read-only, and Read/write.

• Increasing the size of the disk (up to the 2-terabyte [TB] limit for the OS disk and the 4-TB limit for a
data disk).
• Switching the storage account type between Standard locally redundant storage (LRS), Standard geo-
redundant storage (GRS), or Standard read-access geo-redundant storage (RA-GRS) (unmanaged,
standard storage disks only).

Note: Managed disks support only the Standard_LRS storage account type.

• Switching the performance tier between Standard and Premium (managed disks only, if the VM size
supports Premium storage).
You can change the host caching mode from the Azure VM’s disks blade. You can change the size,
resiliency, and performance tier from the blades of individual disks. To modify disk settings by using Azure
PowerShell, run the Set-AzureRmVMDataDisk cmdlet, followed by the Update-AzureRmVM cmdlet. To
accomplish the same task by using Azure CLI, run az disk update.
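
For managed disks, one way to grow a data disk and move it to the Premium performance tier is the Update-AzureRmDisk cmdlet, as sketched below. The names and sizes are placeholders, the VM size must support Premium storage, and the VM must be deallocated for these changes:

```powershell
# Deallocate the VM; size and performance-tier changes require it.
Stop-AzureRmVM -ResourceGroupName "ContosoRG" -Name "ContosoVM" -Force

# Grow the disk to 1 TB and switch it to Premium storage (disks cannot be shrunk).
$diskUpdate = New-AzureRmDiskUpdateConfig -DiskSizeGB 1024 -AccountType PremiumLRS
Update-AzureRmDisk -ResourceGroupName "ContosoRG" -DiskName "ContosoVM-data0" -DiskUpdate $diskUpdate

Start-AzureRmVM -ResourceGroupName "ContosoRG" -Name "ContosoVM"
```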

Azure VM disk mobility

Cross-premises Azure VM disk operations
When you create a VM based on an image, the
Azure platform automatically provisions a new OS
disk. Alternatively, you can attach an existing disk
containing an OS to a new Azure VM. This
typically happens when you migrate a VM from
your on-premises environment to Azure. Similarly,
you can attach either new (empty) or existing data
disks to any Azure VM, up to the limit determined
by its size.
When migrating on-premises disks and images to Azure, remember that on-premises Hyper-V virtual
hard disks can use either the .vhd or the .vhdx format. At the time of authoring this course, Azure does
not support the .vhdx format. Effectively, if you intend to upload an on-premises .vhdx file to Azure and
use it to provision a new Azure VM, you must first convert it to the .vhd format. You use the Edit Virtual
Hard Disk Wizard in the Hyper-V Manager console for this purpose.

Other considerations when migrating .vhd files from your on-premises Hyper-V servers include:

• The 2-TB and 4-TB limits on the size of the OS and data disks respectively of Azure VMs. If your virtual
disks exceed this limit, try compressing them or splitting them into multiple disks (subsequently, you
can create a multidisk volume in an Azure VM to provide the matching drive size).

• Lack of support for dynamically expanding .vhd files in Azure. Effectively, you will need to make sure
that you convert any virtual disks to a fixed format before you upload them into an Azure Storage
account.

To upload a .vhd file into Azure, you can use the Add-AzureRmVHD Azure PowerShell cmdlet. This will
automatically store the file as a page blob in the target storage account that you specify (as part of the
Destination parameter of the cmdlet). Conversely, you can use Save-AzureRmVHD to download .vhd files
from Azure Storage to your on-premises virtualization environment.

In addition to providing robust data transfer functionality, these cmdlets offer many advantages:

• Add-AzureRmVHD automatically converts dynamic disks to fixed format, eliminating the need to
perform this step prior to the transfer.

• Add-AzureRmVHD and Save-AzureRmVHD inspect the content of .vhd files and copy only their
used portion, minimizing the duration of data transfers.

• Add-AzureRmVHD supports uploads of differencing disks when the base image already resides in
Azure Storage. This minimizes the time and bandwidth it takes to upload an updated version of the
image.

• Both cmdlets support multithreading for increased throughput. To apply multithreading, use the
NumberOfUploaderThreads parameter of Add-AzureRmVHD or the NumberOfThreads parameter of
Save-AzureRmVHD.

• Both cmdlets allow you to access the target Azure Storage account by using a Shared Access
Signature (SAS) token instead of a storage account key. This allows you to restrict the permissions of
the person performing the upload or download to an individual storage account blob container or an
individual blob. You can also specify the time window during which the SAS token is valid.

Additional Reading: For details regarding SAS, refer to Module 6, “Planning and
implementing storage, backup, and recovery services.”

Note: You can accomplish the same outcome by using the az storage blob upload and az
storage blob download Azure CLI commands.
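
For example, a multithreaded upload and download might look like the following sketch. The storage account, container, and local paths are placeholders, and the commands assume an authenticated Azure PowerShell session:

```powershell
# Upload a local .vhd file as a page blob; dynamic disks are converted to fixed automatically.
Add-AzureRmVhd -ResourceGroupName "ContosoRG" `
    -Destination "https://contosostorage.blob.core.windows.net/vhds/contoso-os.vhd" `
    -LocalFilePath "D:\VHDs\contoso-os.vhd" -NumberOfUploaderThreads 8

# Download the blob back to on-premises storage.
Save-AzureRmVhd -ResourceGroupName "ContosoRG" `
    -SourceUri "https://contosostorage.blob.core.windows.net/vhds/contoso-os.vhd" `
    -LocalFilePath "D:\VHDs\contoso-os-copy.vhd" -NumberOfThreads 8
```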

Once the file resides in Azure Storage, you can use the Azure portal, Azure PowerShell, or Azure CLI to
attach disks to a VM. For example, the Add-AzureRmVmDataDisk cmdlet supports attaching an existing
data disk to an Azure VM. Conversely, you can use Remove-AzureRmVmDataDisk cmdlets to detach an
existing data disk from an Azure VM.

Note: The equivalent Azure CLI commands are azure vm disk attach-new and azure vm
disk detach, respectively.

In addition to facilitating upload and download of .vhd files, Azure also offers the Import/Export service.
This service allows you to transfer physical disks between on-premises locations and Azure Storage
whenever the data volume makes it too expensive or unfeasible to rely on network connectivity.

The process involves creating either import or export jobs, depending on the transfer direction:

• You create an import job to copy data from your on-premises infrastructure onto hard drives that you
subsequently ship to the Azure datacenter that is hosting the target storage account.

• You create an export job to request that data currently held in an Azure Storage account be copied to
hard drives that you ship to the Azure datacenter. When the drives arrive, the Azure datacenter
operations team completes the request and ships the drives back to you.

Note: For more information regarding the Import/Export service, refer to Module 6,
“Planning and Implementing Azure Storage” of this course.

Azure VM disk copy and snapshot operations


The concept of disk mobility applies not only to cross-premises uploads and downloads, but also to
operations that involve creating copies and snapshots of .vhd files within Azure. You can use different
methods to copy Azure Storage blobs, including the Start-AzureStorageBlobCopy Azure PowerShell
cmdlet and its Azure CLI equivalent, az storage blob copy, which can perform an asynchronous copy of a
blob between two Azure Storage accounts. These tools facilitate copy operations of both managed and
unmanaged disks and images.
To create a snapshot of a managed disk or an image, you can use the New-AzureRmSnapshot Azure
PowerShell cmdlet or its Azure CLI equivalent, az snapshot create. If you take a snapshot of an image,
you can use it to create a new image. Similarly, a snapshot of a disk allows you to create an exact replica
in the form of a managed snapshot.

You can copy managed snapshots across Azure regions and across Azure subscriptions. When the copy
completes, you can use it to create a managed image, allowing you to provision Azure VMs based on that
image within the target region and subscription. After you create an Azure VM by following this process,
you should delete the intermediate snapshots to avoid Azure Storage charges.
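The cross-region copy process described above can be sketched with Azure CLI. The resource group, snapshot, storage account, and image names below are placeholders, and you would substitute your own subscription ID, so treat this as an outline rather than a definitive procedure:

```
# Create a snapshot of an existing managed disk (all names are hypothetical)
az snapshot create --resource-group SourceRG --name mydisk-snap \
    --source /subscriptions/<subscription-id>/resourceGroups/SourceRG/providers/Microsoft.Compute/disks/mydisk

# Obtain a temporary SAS URI granting read access to the snapshot
az snapshot grant-access --resource-group SourceRG --name mydisk-snap \
    --duration-in-seconds 3600 --query accessSas --output tsv

# Copy the snapshot as a blob into a storage account in the target region
az storage blob copy start --destination-blob mydisk-snap.vhd \
    --destination-container snapshots --account-name targetstorageacct \
    --source-uri "<SAS URI returned by the previous command>"

# After the copy completes, create a managed image in the target region
az image create --resource-group TargetRG --name myimage --os-type Windows \
    --source https://targetstorageacct.blob.core.windows.net/snapshots/mydisk-snap.vhd
```

Once the image exists in the target region and subscription, you can delete the intermediate blob and snapshot to avoid ongoing storage charges.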

Additional Reading: Unmanaged disks support incremental snapshots via Snapshot Blob
REST API. This course does not cover Azure REST API. For more information, refer to: “Snapshot
Blob” at: https://aka.ms/f4pv4k

Additional Reading: You can perform many Azure Storage operations, including uploads,
downloads, copies, moves, and snapshots, by using the AzCopy tool available from
http://aka.ms/downloadazcopy. For more information, refer to: “Transfer data with the AzCopy
on Windows” at: https://aka.ms/osjyej

Managing disk volumes in Azure VMs


When you attach disks to an Azure VM, you can
manage them with the same tools and techniques
that you would use to manage disks on a physical
machine or on a VM deployed on an on-premises
Hyper-V server. On Windows VMs, the primary
method of multidisk management leverages the
Storage Spaces technology, which you can access
through Server Manager or Windows PowerShell.
On Linux VMs, for multidisk configurations, you
can use LVM or the mdadm tool.

By creating multidisk volumes, you can increase per-volume throughput beyond the throughput limit of
an individual disk. Similarly, you can create a multidisk volume whose size will exceed the 4-TB limit of an
individual Azure VM data disk.

Creating multidisk volumes in Windows Azure VMs


Starting with Windows Server 2012, you can use the Storage Spaces functionality to create multidisk
volumes. This functionality offers several benefits:

• Improved performance, compared to individual disks or volumes configured by leveraging dynamic
disks (available in previous versions of Windows).

• Three-way mirroring, offering higher resiliency than two-way mirror or parity configurations.

Note: Azure VM disk files are inherently resilient because they take the form of page blobs
residing in Azure Storage accounts. Azure Storage maintains at least three copies of the data in
each storage account, replicated synchronously within the same Azure region. You can rely on this
built-in resiliency and choose the simple layout when creating Storage Spaces–based volumes,
rather than using mirroring or parity options.

To create a storage space in an Azure Windows VM, follow these steps:


1. Create a new VM running Windows Server 2012 or later. When using lower-tier VMs, remember that
they support fewer data disks.

2. Attach new, empty disks to the VM.


3. Connect to the Windows OS running in the VM by using the RDP client.

4. Ensure that the File Server role service is installed.

5. Open Server Manager and navigate to File and Storage Services.


6. Click Storage Pools, and then click Tasks.

7. Click New Storage Pool, and then add the empty disks to the pool.

8. In File and Storage Services, select the pool, and then in the Virtual Disks pane, click New Virtual
Disk.

9. Set the disk layout and size, and then click Create.

10. The New Volume Wizard appears. Select the virtual disk that you created, choose a drive letter, and
then create the volume.
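If you prefer scripting over Server Manager, steps 5 through 10 can be approximated inside the VM with the Windows PowerShell storage cmdlets. This is a sketch under the assumption that the attached data disks are empty and poolable; the pool, virtual disk, and volume names are examples:

```
# Run inside the Windows VM with administrative privileges
# Gather the empty data disks that are eligible for pooling
$disks = Get-PhysicalDisk -CanPool $true

# Create the storage pool from those disks
New-StoragePool -FriendlyName "DataPool" `
    -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks

# Create a virtual disk with the simple (striped) layout
New-VirtualDisk -StoragePoolFriendlyName "DataPool" -FriendlyName "DataVDisk" `
    -ResiliencySettingName Simple -UseMaximumSize

# Initialize, partition, and format the resulting disk
Get-VirtualDisk -FriendlyName "DataVDisk" | Initialize-Disk -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "Data"
```

The simple layout matches the guidance in the note above: it stripes data for throughput while relying on the platform's built-in storage replication for resiliency.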

Creating multidisk volumes in Linux Azure VMs


Linux Azure VMs support the use of LVM and the mdadm tool to create multidisk volumes. The process of
creating such volumes is identical to creating multidisk volumes in on-premises computers running the
matching Linux distributions.

Additional Reading:

• For more information, refer to: “Configure LVM on a Linux VM in Azure” at: https://aka.ms/s665fp

• For more information, refer to: “Configure Software RAID on Linux” at: https://aka.ms/xmcbo6
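As a sketch of the mdadm-based approach, the following commands stripe two attached data disks into a single volume. The device names are assumptions — they vary with VM size and attachment order — so verify them with lsblk before running anything:

```
# Identify the empty data disks first (device names below are assumptions)
lsblk

# Stripe two data disks into a single RAID 0 array
sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdc /dev/sdd

# Create a file system on the array and mount it
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /data
sudo mount /dev/md0 /data
```

For the mount to persist across reboots, you would also add a corresponding entry to /etc/fstab, as described in the articles referenced above.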

Demonstration: Configuring Azure VM disks


In this demonstration, you will see how to attach a new data disk to an Azure VM.

Check Your Knowledge


Question

You have an Azure VM running Windows Server 2016 with a single 4-TB data disk. You must create
a file system volume of 12 TB in size. What should you do?

Select the correct answer.

Attach two data disks. Create a Storage Spaces–based volume with the simple layout.

Increase the size of the data disk.

Attach one disk. Convert data disks to dynamic disks and create a stripe.

Attach two disks. Create a Storage Spaces–based volume with the parity layout.

Convert the disk to Premium storage and increase the size of the data disk.

Lesson 3
Managing and monitoring Azure VMs
Azure offers several methods that simplify and enhance management of both Windows and Linux Azure
VMs. In this lesson, you will learn about these methods.

Lesson Objectives
After completing this lesson, you will be able to:

• Explain the role of the VM Agent and VM extensions.

• Describe the VM Agent Custom Script extension.

• Describe the VM Agent Desired State Configuration (DSC) extension.


• Explain how to monitor Azure VMs.

• Configure an Azure VM running Linux by using the Custom Script extension.

Overview of VM Agent and VM extensions


When deploying Azure VMs, you implement
platform-specific configurations (such as Azure
Storage disks or virtual network settings). In
addition, you can configure the OS and workloads
running in the VM by using a software component
called the Azure VM Agent. While the VM Agent
includes its own useful features, one of its most
significant benefits is its support for loading
software components called VM extensions. VM
extensions implement additional functionality,
typically in the areas of management, monitoring,
or security.

Note: On Windows Azure VMs, the VM Agent is highly recommended but optional. On
Linux Azure VMs, the VM Agent is mandatory.

Additional Reading: For more information, refer to: “Understanding and using the Azure
Linux Agent” at: https://aka.ms/betzjc

VM images available from the Azure Marketplace include the VM Agent by default. When creating
custom images, you should install the agent manually, before generalizing the OS. The Windows VM
Agent is available from https://aka.ms/kz1pcf as a Windows Installer package. Linux versions of the VM
Agent are available for download from GitHub at https://aka.ms/hmnq48. Regardless of the Azure VM’s
operating system, after the installation completes, you must set the ProvisionGuestAgent property of
the Azure VM by using the Update-AzureVM Azure PowerShell cmdlet or az vm update Azure CLI
command.

After you install the agent, you can proceed with adding VM extensions. Some of the more commonly
used VM extensions include:

• Azure VM Access extension. On Windows VMs, this extension enables you to reset local administrative
credentials and fix misconfigured RDP settings. On Linux VMs, it enables you to reset the admin
password or SSH key, fix misconfigured SSH settings, create a new sudo user account, and check disk
consistency.

• Chef Client and Puppet Enterprise Agent. These extensions integrate Windows and Linux VMs with
cross-platform Chef and Puppet (respectively) enterprise management solutions.

• Custom Script extension for Windows. This extension enables you to run custom Windows PowerShell
scripts within Azure VMs. The most common use of the Custom Script extension involves applying
custom configuration settings during VM provisioning. However, you can also use it to perform any
scriptable action after the initial deployment. Scripts can reside in Azure Storage or any internet-
accessible location, such as GitHub. If you are deploying a Windows VM from the Azure portal, you
can also provide the script at the deployment time.

• Custom Script extension for Linux. This extension is equivalent to its Windows counterpart, enabling
you to run custom scripts within Linux Azure VMs. The extension supports any scripting language that
the OS supports, such as Python or Bash. Scripts can reside in Azure Storage or any internet-
accessible location, such as GitHub.

• DSC extension for Windows. This extension implements a PowerShell-based configuration of Windows
components and applications, including the ability to modify settings such as files, folders, registry
entries, services, or OS features.

• DSC extension for Linux. This extension implements a PowerShell-based configuration of Linux
components, equivalent to what PowerShell DSC provides for Windows.
• Azure Diagnostics extension. This extension enables Azure VM diagnostics that collect data from the
OS and its components on Windows and Linux VMs. The extension copies data to Azure standard
storage, allowing for long-term storage and further analysis by using business intelligence tools.
• Docker extension. This extension facilitates automatic installation of Docker components, including
the Docker daemon, Docker client, and Docker Compose on Linux VMs. This simplifies the process of
implementing and managing containerized workloads.

• Microsoft Antimalware extension. This extension helps protect against viruses, spyware, and malware
on Windows VMs in real time.

Additional Reading: For more information, refer to: “Virtual machine extensions and
features for Windows” at: http://aka.ms/B8t3pl and “Virtual machine extensions and features for
Linux” at: https://aka.ms/jpzwcw

What is the VM Agent Custom Script extension?


The Custom Script extension for Azure VMs
enables you to invoke local execution of scripts
within a Windows or Linux Azure VM. On the
Windows OS, you can implement scripts by using
Windows PowerShell. The extension for the Linux
OS allows you to run code written in any scripting
language that the OS supports, such as Python or
Bash.

The most common use of the Custom Script


extension involves applying custom configuration
settings during VM provisioning. However, you
can also use it to perform any scriptable action
after the initial deployment. The script can reside in any internet-accessible location, including Azure
Storage. You can also upload it directly from the Azure portal when installing the extension on an Azure
VM.
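To make this concrete, here is a minimal Bash payload of the kind you might hand to the Custom Script extension for Linux. The marker file path is a hypothetical example chosen so the script runs without elevated privileges; a production script would typically install packages or configure services instead:

```shell
#!/bin/bash
# Hypothetical Custom Script extension payload: record provisioning details
# to a marker file so administrators can confirm the script executed.
MARKER="${MARKER:-/tmp/provisioned.txt}"
echo "provisioned by custom script at $(date -u +%Y-%m-%dT%H:%M:%SZ)" > "$MARKER"
echo "marker written to $MARKER"
```

You would upload such a script to an internet-accessible location (or Azure Storage) and reference it when installing the extension, as shown in the next topic.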

Implementing the Custom Script extension via scripting


To implement the functionality described in this topic, you must install the Custom Script extension in the
OS hosted on an Azure VM that you intend to manage. Then you must assign the script that you want the
extension to execute. You can accomplish this either during VM provisioning or afterward, by running the
Set-AzureRmVMCustomScriptExtension Windows PowerShell cmdlet:

Set-AzureRmVMCustomScriptExtension -ResourceGroupName <Resource Group name> -Location
<Azure region> -VMName <VM name> -Name <Custom Script extension name> -TypeHandlerVersion
"2.0" -StorageAccountName <Storage account name> -StorageAccountKey <Storage account key>
-FileName <PowerShell script name> -ContainerName <Storage account container name> -Run
<command to execute>

The cmdlet references the fully qualified location of the script file by using a combination of the
-StorageAccountName, -ContainerName, and -FileName parameters. If the container of the storage
account does not permit anonymous access, you must provide the value of the storage account key
(-StorageAccountKey). To specify the command and the parameters of the script, respectively, use the
-Run and -Argument parameters (the value of -Run would typically match the value of -FileName).
TypeHandlerVersion represents the version of the extension to use (which you can determine by
running the Get-AzureRmVMExtensionImage cmdlet with the value of Microsoft.Compute as the
-PublisherName parameter and the value of CustomScriptExtension as the -Type parameter).
-ResourceGroupName, -Location, and -VMName uniquely identify the target Azure VM.
Alternatively, you can use the Set-AzureRMVMExtension Azure PowerShell cmdlet and specify
CustomScriptExtension as the value of its -ExtensionType parameter. You can use hash tables to assign
values to its -Settings and -ProtectedSettings parameters, as shown below:

$settings = @{"fileUris" = "[]"; "commandToExecute" = ""};
$protectedSettings = @{"storageAccountName" = <Storage account name>; "storageAccountKey" =
<Storage account key>};
Set-AzureRmVMExtension -ResourceGroupName <Resource Group name> -Location <Azure region> -
VMName <VM name> -Name <Custom Script extension name> -Publisher "Microsoft.Compute" -
ExtensionType "CustomScriptExtension" -TypeHandlerVersion "2.0" -Settings $settings -
ProtectedSettings $protectedSettings

This cmdlet closely resembles Set-AzureRmVMCustomScriptExtension. For example, it also uniquely
identifies the target Azure VM by using the combination of -ResourceGroupName, -Location, and
-VMName. On the other hand, it relies on the two hash tables to point to the location and execution
settings of the custom script. Due to its more generic purpose, it also includes direct references to the
extension that you intend to apply, in the form of the -Publisher, -ExtensionType, and -TypeHandlerVersion
parameters.

Note: When applying scripts to Azure VMs running the Linux OS, you would set the
-Publisher parameter to Microsoft.OSTCExtension and the -ExtensionType parameter to
CustomScriptForLinux.

Additional Reading: To accomplish the same objective by using Azure CLI, run the az vm
extension set command. For more information, refer to: “Using the Azure Custom Script
Extension with Linux Virtual Machines” at: https://aka.ms/stuho4
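For comparison, the az vm extension set route mentioned above might look as follows for a Linux VM. The resource names and the script URL are placeholders, so adapt them before use:

```
az vm extension set --resource-group MyResourceGroup --vm-name MyLinuxVM \
    --name CustomScript --publisher Microsoft.Azure.Extensions \
    --settings '{"fileUris":["https://mystorageacct.blob.core.windows.net/scripts/config.sh"],"commandToExecute":"bash config.sh"}'
```

As with the PowerShell approach, protected values such as storage account keys can be passed separately (via protected settings) so they are not exposed in the extension's public configuration.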

Implementing Custom Script extension via Resource Manager templates


You can also use Resource Manager templates to implement the Custom Script extension. The imperative
method (based on the Set-AzureRmVMExtension cmdlet) and the declarative method (based on the Azure
Resource Manager template presented here) are interchangeable. They support the same set of
parameters and allow you to deploy the same types of scripts. As an example, the following template
demonstrates how to apply a script named script.ps1 to an Azure VM that is running Windows and is
identified by the vmName and location parameters:

{
"type": "Microsoft.Compute/virtualMachines/extensions",
"name": "MyCustomScriptExtension",
"apiVersion": "2015-05-01-preview",
"location": "[parameters('location')]",
"dependsOn": [
"[concat('Microsoft.Compute/virtualMachines/',parameters('vmName'))]"
],
"properties": {
"publisher": "Microsoft.Compute",
"type": "CustomScriptExtension",
"typeHandlerVersion": "2.0",
"autoUpgradeMinorVersion": true,
"settings": {
"fileUris": [
"http://storageaccountname.blob.core.windows.net/customscriptfiles/script.ps1"
],
"commandToExecute": "powershell.exe -ExecutionPolicy Unrestricted -File script.ps1"
}
}
}

Note: When deploying Custom Script extension–based scripts via templates to Azure VMs
running the Linux OS, you set the publisher key to Microsoft.Azure.Extensions and the type
key to CustomScript.

Additional Reading: For more information, refer to: “Custom Script Extension for
Windows” at: https://aka.ms/rsj60b and “Using the Azure Custom Script Extension with Linux
Virtual Machines” at: https://aka.ms/nvkum3

What is the VM Agent DSC extension?


PowerShell DSC is a technology introduced in
Windows Management Framework 4.0 that
implements declarative configuration
management. Initially, it was available exclusively
on computers running the Windows operating
system. In its current version, it allows you to
apply script-based configuration to Windows and
Linux OSs, both on-premises and in the cloud. To
implement DSC in Azure VMs, you rely on the VM
Agent DSC extension. You can apply it to Azure
VMs by using the Azure portal, Azure PowerShell,
Azure CLI, and Azure Resource Manager
templates.

Windows-based DSC relies on the Local Configuration Manager (LCM) component, which serves as the
execution engine of the Windows PowerShell DSC scripts. LCM is responsible for coordinating
implementation of DSC settings and monitoring their ongoing status. Like DSC, LCM is an integral part of
Windows Server 2012 R2 and Windows Server 2016. It is also available for Windows Server 2008 R2 as part
of the Windows Management Framework download. The DSC LCM ConfigurationMode property takes
on one of three possible values, which determines how LCM handles Windows PowerShell DSC scripts:

• ApplyOnly. LCM executes the script only once.


• ApplyAndMonitor. LCM executes the script only once, but then monitors the resulting configuration
and records any configuration drift in logs.

• ApplyAndAutoCorrect. LCM executes the script in regular intervals, automatically correcting any
configuration drift.

DSC relies on software components known as DSC resources to handle resource-specific implementation
details. In this context, the term resource means any configurable software component, such as a file,
folder, registry, service, or an OS feature. DSC includes a number of built-in resources, but it is extensible,
making its management scope virtually unlimited.

You can deploy DSC configuration in one of two modes: push mode and pull mode. The push mode
involves invoking deployment from a management computer against one or more managed computers.
In the pull mode, managed computers act independently by obtaining configuration data from a
designated location (referred to as a Pull Server). This topic covers the push mode. You will find more
information regarding the pull mode in Module 11, “Implementing Azure-based management and
automation.”

Note: The Linux operating system relies on the Open Management Infrastructure (OMI)
Common Information Model (CIM) server to provide equivalent functionality. Installation of the
OMI CIM server occurs automatically when you deploy the Azure VM DSC extension to Azure
VMs running Linux.

Creating Windows PowerShell DSC configuration scripts


DSC scripts utilize syntax that is enclosed in the configuration construct. Windows PowerShell 4.0
(included in Windows Management Framework 4.0) introduced this syntax to define the intended OS
configuration.

Note: In general, you must convert Windows PowerShell DSC scripts into Management
Object Format (MOF) node configuration files by using Windows PowerShell cmdlets to compile
them. However, Azure PowerShell handles the compilation automatically when deploying DSC
extensions to Azure VMs running the Windows OS.

For example, the following .ps1 file instructs the LCM running on the local computer to install the Internet
Information Services (IIS) server role and the ASP.NET 4.5 feature, and to disable the default website. Note
that a custom DSC resource facilitates disabling the default website. You import this resource by adding
the Import-DscResource cmdlet. In addition, as the presence of the DependsOn element demonstrates,
you can control the sequence in which tasks are executed by defining dependencies between them:

configuration IISConfig
{
Import-DscResource -Module xWebAdministration
node ("localhost") {
WindowsFeature IIS {
Ensure = "Present"
Name = "Web-Server"
}

WindowsFeature AspNet45 {
Ensure = "Present"
Name = "Web-Asp-Net45"
}

xWebsite DefaultSite {
Ensure = "Present"
Name = "Default Web Site"
State = "Stopped"
PhysicalPath = "C:\inetpub\wwwroot"
DependsOn = "[WindowsFeature]IIS"
}
}
}

Implementing DSC in Azure VMs


Applying DSC to an Azure VM running Windows involves a sequence of steps. Here is an example of an
implementation procedure that relies on an Azure Storage account to host a configuration script:

1. Sign in to your Azure subscription by using the Add-AzureRmAccount cmdlet. If you have multiple
subscriptions associated with the same account, ensure that you select the target one by using the
Select-AzureRmSubscription cmdlet:

Add-AzureRmAccount

2. Publish the Azure DSC configuration to an Azure Storage account by running the Publish-
AzureRmVMDscConfiguration cmdlet. The configuration (the -ConfigurationPath parameter) takes
the form of a Windows PowerShell script (a .ps1 file, like the one in the previous section), a Windows
PowerShell module (a .psm1 file), or an archive containing a combination of scripts, modules, and
resources (a .zip file). The -ResourceGroupName, -StorageAccountName, and -ContainerName
parameters designate the storage account blob container where the configuration will reside:

$moduleURL = Publish-AzureRmVMDscConfiguration -ConfigurationPath <File system path
of the configuration script> -ResourceGroupName <Name of resource group hosting the
storage account> -StorageAccountName <Storage account name> -ContainerName <Blob
container name>

The publishing process will first generate a .zip file containing all scripts, modules, and resources that
the configuration references, and then upload this archive into the Azure Storage location that you
specified.

3. Create a Shared Access Signature (SAS) token that will provide access to the archive configuration file
residing in the Azure Storage account. An SAS is a digitally signed string that identifies an Azure
Storage object and determines access permissions to that object. In this case, Read permissions will
suffice. To create an SAS token, you must first establish the security context for access to the target
Azure Storage account. To do so, you must provide the storage account name and storage account
key (which you can retrieve from the Azure portal or by using Azure PowerShell):

$storageContext = New-AzureStorageContext -StorageAccountName <Storage account name>
-StorageAccountKey <Storage account key>
$sasToken = New-AzureStorageContainerSASToken -Name <Blob container name> -Context
$storageContext -Permission r

Note: You will learn about SAS and other Azure Storage–related topics in more detail in
Module 6, “Planning and implementing storage, backup, and recovery services.”

4. Create a variable that takes the form of a hash table or a string, and contains settings identifying the
location of the DSC archive, DSC configuration function, and the newly generated SAS token:

$settingsHashTable = @{
"ModulesUrl" = "$moduleURL";
"ConfigurationFunction" = "<Name of DSC configuration file>\<Name of DSC
configuration>";
"SasToken" = "$sasToken"
}

5. Enable and configure the Azure VM Agent DSC extension by running the Set-AzureRmVMExtension
cmdlet. The -ResourceGroupName, -VMName, and -Location parameters identify the target Azure
VM. The -Name, -Publisher, -ExtensionType, and -TypeHandlerVersion parameters designate the
intended VM Agent extension:

Set-AzureRmVMExtension -ResourceGroupName <Resource group name> -VMName `
<VM name> -Location <Azure region> -Name 'DSC' -Publisher 'Microsoft.PowerShell' `
-ExtensionType 'DSC' -TypeHandlerVersion '2.9' -Settings $settingsHashTable

Alternatively, as with the Custom Script extension, you can use the extension-specific Azure
PowerShell cmdlet Set-AzureRmVMDscExtension.

Additional Reading: For more information, refer to: “Azure.Service” at:
http://aka.ms/Cyyypz

Note: As with Custom Script extension scripts, you can reference DSC configuration files
residing in any internet-accessible location, including Azure Storage.

Additional Reading: To accomplish the same objective by using Azure CLI, run the az vm
extension set command. For more information, refer to: https://aka.ms/i3t1rj

You can also deploy the DSC configuration by using the Azure Resource Manager templates.

Additional Reading: For more information, refer to: “Desired State Configuration
extension with Azure Resource Manager templates” at: https://aka.ms/od8t5e

Implementing the Azure DSCForLinux extension


The VM Agent DSC extension extends DSC functionality to Azure VMs running the Windows OS. The
Azure DSCForLinux extension allows you to implement DSC on Azure VMs running the Linux OS. The
extension delivers its capabilities based on Open Management Infrastructure open source software
packages.

Despite obvious differences resulting from distinct OS platforms, DSC on Linux resembles DSC on
Windows in terms of architecture and procedure. They both rely on DSC resources to handle resource-
specific implementation details. They both follow the same syntax of a configuration document describing
the target state of a managed computer. To push configurations to computers running the Linux OS, you
can use the same Windows PowerShell cmdlets or Azure CLI commands.
To implement the DSCForLinux extension, create a configuration document file, compile it, and then copy
the resulting MOF file to an internet-accessible location, such as Azure Storage or GitHub. Next, use Azure
PowerShell or Azure CLI to deploy the configuration to target Azure VMs. This step is similar to how you
use the VM Agent DSC extension for Windows. Note that you will need to adjust the -Publisher,
-ExtensionType, and -TypeHandlerVersion parameters accordingly.
Here is an example of an implementation procedure that relies on an Azure Storage account to host a
configuration script:

1. Sign in to your Azure subscription by running the Login-AzureRmAccount cmdlet. If you have
multiple subscriptions associated with the same account, make sure to select the target subscription
by using the Select-AzureRmSubscription cmdlet:

Login-AzureRmAccount

2. Copy the compiled configuration file to an Azure Storage account container. For this purpose, you
can use Azure PowerShell, Azure CLI, or any Azure Storage tools.

3. Take note of the storage account name and its key. You can obtain this information from the Azure
portal or by using Azure PowerShell. You will need it to retrieve the configuration file when
implementing the DSC configuration.

4. Create variables that will contain values necessary for configuring the Azure DSCForLinux extension.
As with the Azure VM Agent DSC extension for Windows, these variables include two hash tables,
which you can implement as strings, as illustrated in the following example. You will assign them to
the -SettingString and -ProtectedSettingString (or -Settings and -ProtectedSettings if you opt to use
hash tables) parameters of the Set-AzureRmVMExtension cmdlet. $protectedSettingString stores
information that facilitates access to the MOF configuration file residing in the Azure Storage account.
$SettingString specifies the deployment mode (push mode, in this case) and the file location:

$privateConfig = '{
"StorageAccountName": "<Storage account name>",
"StorageAccountKey": "<Storage account key>"
}'
$publicConfig = '{
"ExtensionAction": "Push",
"FileUri": "<mof-file-uri>"
}'

5. Deploy the configuration by running the Set-AzureRmVMExtension cmdlet. The


-ResourceGroupName, -VMName, and -Location parameters identify the target Azure VM. The
-Name, -Publisher, -ExtensionType, and -TypeHandlerVersion parameters designate the intended VM
Agent extension:

Set-AzureRmVMExtension -ResourceGroupName <Resource group name> -VMName <VM name> `
-Location <Azure region> -Name 'DSCForLinux' -Publisher 'Microsoft.OSTCExtensions' `
-ExtensionType 'DSCForLinux' -TypeHandlerVersion '2.6' -SettingString $publicConfig `
-ProtectedSettingString $privateConfig

Additional Reading: For more information on implementing DSC configurations on Linux,
including template-based deployments, refer to: “DSCForLinux Extension” at:
https://aka.ms/v8aj7t

Monitoring Azure VMs


Like most Azure services, Azure VMs enable you to
track their performance, availability, and usage.
Some of this data is available directly from the
Azure portal. You can also collect Azure VM
metrics and diagnostics via Azure PowerShell and
Azure CLI scripts. In addition, you can collect this
data programmatically via the REST API and Azure
SDKs.

Collecting metrics and diagnostics of Azure VMs
Azure VM monitoring data can originate from
several sources, including the hypervisor, guest
OS, and workloads running within the guest OS. The default metrics that you can view directly in the
Azure portal represent data available to the Hyper-V hosts where target VMs run. These metrics include
the following:

• Percentage CPU

• Network In

• Network Out

• Disk Read Bytes

• Disk Write Bytes

• Disk Read Operations/Sec

• Disk Write Operations/Sec

• CPU Credits Consumed (applicable to Azure B-series burstable VM sizes)

• CPU Credits Remaining (applicable to Azure B-series burstable VM sizes)

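Beyond the portal, these host-level metrics can also be retrieved from the command line. A hedged Azure CLI sketch — the subscription ID, resource group, and VM name are placeholders — might look like this:

```
# Retrieve recent "Percentage CPU" values at one-minute granularity
az monitor metrics list \
    --resource /subscriptions/<subscription-id>/resourceGroups/MyRG/providers/Microsoft.Compute/virtualMachines/MyVM \
    --metric "Percentage CPU" --interval PT1M
```

The same resource ID format works for the other metrics listed above, such as Network In or Disk Read Bytes.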


You can gain more insight into performance and the state of an Azure VM’s OS by enabling diagnostics,
which implement guest-level monitoring. On Azure VMs running Windows, this allows you to collect data
representing:

• Basic metrics (CPU, memory, disk, network, ASP.NET and Microsoft SQL Server)

• Performance counters

• Event logs (Application, Security, System)

• IIS logs and failed request logs

• Tracing output generated by Microsoft .NET applications

• Event tracing for Windows (ETW) events

• Crash dumps

• Application Insights data

• Boot diagnostics (recording output displayed on the VM console and providing its screenshots)

On Azure VMs running Linux, the selection is considerably more limited and includes basic metrics and
boot diagnostics.
To enable diagnostics, you must designate an Azure standard storage account where the collected data
will reside. Consequently, there is a cost associated with enabling diagnostics, directly proportional to the
volume of data that you decide to collect.

You can customize the way the Azure portal displays an Azure VM’s metrics from its Metrics blade.
Similarly, to enable and configure collection of an Azure VM’s diagnostics, navigate to its Diagnostics
settings blade. The diagnostics functionality relies on the VM Agent Diagnostics extension, available for
Windows (IaaSDiagnostics) and Linux (LinuxDiagnostics). Enabling diagnostics automatically installs the
extension.

To view and analyze diagnostics and logs, you can allow a range of tools and services to access tables and
blobs in the Azure Storage account that is hosting collected data. The primary Azure offering that handles
data collection and analysis is Azure Log Analytics. You can also export data into Microsoft Excel or any
business intelligence application (such as Microsoft Power BI) for further analysis. In addition, you can
stream diagnostics data by using Event Hubs, which can then pipe it to non-Microsoft logging and
telemetry systems, including custom security information, event management, and analytics solutions.

Note: You will learn about Log Analytics in Module 11, “Implementing Azure-based
Management, Monitoring and Automation.”

Additional Reading: For more information, refer to: “Streaming Azure Diagnostics data in
the hot path by using Event Hubs” at: https://aka.ms/x6q8fd

Note: For a comprehensive view of metrics across multiple resources in your subscription,
use Azure Metrics. You will learn about Azure Metrics in Module 11, “Implementing Azure-based
Management, Monitoring and Automation.”

Alerts
Alert rules allow you to trigger notifications according to metrics-based criteria that you specify. Each rule
includes a metric, condition, threshold, and time period that collectively determine when to raise an alert.
You can send an email containing the alert notification to any email address. It is also possible to route
alerts to an arbitrary HTTP or HTTPS endpoint (referred to as a webhook). In addition, you can invoke
execution of an Azure Automation runbook or start an Azure Logic App in response to an alert.
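As an illustration — using the newer az monitor metrics alert command, which may differ from the tooling available when this course was written — a CPU-based alert rule might be sketched like this, with all names and the subscription ID as placeholders:

```
# Raise an alert when average CPU exceeds 80 percent over a 5-minute window,
# evaluated every minute, notifying a previously created action group
az monitor metrics alert create --name HighCpuAlert --resource-group MyRG \
    --scopes /subscriptions/<subscription-id>/resourceGroups/MyRG/providers/Microsoft.Compute/virtualMachines/MyVM \
    --condition "avg Percentage CPU > 80" \
    --window-size 5m --evaluation-frequency 1m \
    --action MyActionGroup
```

The action group referenced here is what ties the alert to its notification targets, such as email addresses, webhooks, or Azure Automation runbooks.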

Demonstration: Configuring an Azure VM running Linux by using the
Custom Script extension
In this demonstration, you will see how to configure an Azure VM running the Linux OS by using the
Custom Script extension.

Check Your Knowledge


Question

You plan to deploy an Azure virtual machine based on an image of your on-premises Windows
Server 2016 VM that you uploaded to Azure. You must ensure that you can install the DSC
extension on the new Azure VM. What activity should you include in the new Azure virtual
machine’s deployment process?

Select the correct answer.

Provisioning the Azure VM Agent.

Installing Windows Management Framework.

Installing the Azure PowerShell module.

Running sysprep with the specialize option.

Running sysprep with the mode:vm option.



Lab: Managing Azure VMs


Scenario
Now that you have tested basic deployment options of Azure VMs, you need to start testing more
advanced configuration scenarios. Your plan is to step through a sample implementation of a two-tier
Adatum ResDev application. As part of your tests, you will install IIS by using the VM DSC extension on the
front-end tier. You will also set up a multi-disk volume by using Storage Spaces in an Azure VM running
Windows in the back-end tier.

Objectives
After completing this lab, you will be able to:

• Implement desired state configuration of Azure VMs.

• Implement Storage Spaces–based simple volumes in Azure VMs.

Note: The lab steps for this course change frequently due to updates to Azure. Microsoft
Learning updates the lab steps frequently, so they are not available in this manual. Your
instructor will provide you with the lab documentation.

Estimated Time: 60 minutes

Virtual machine: 20533E-MIA-CL1

User name: Student


Password: Pa55w.rd

Question: Why would you use Storage Spaces in an Azure VM considering that Azure
already provides highly available storage built into a storage account?

Module Review and Takeaways


Review Question

Question: Can you use an OS disk of a VM from an on-premises Hyper-V host to deploy an
Azure VM?

Module 5
Implementing Azure App Service
Contents:
Module Overview
Lesson 1: Introduction to App Service
Lesson 2: Planning app deployment in App Service
Lesson 3: Implementing and maintaining web apps
Lesson 4: Configuring web apps
Lesson 5: Monitoring web apps and WebJobs
Lesson 6: Implementing Traffic Manager
Lab: Implementing web apps
Module Review and Takeaways

Module Overview
You can use Microsoft Azure virtual machines (VMs) for many purposes, including hosting web apps by
using Microsoft Internet Information Services (IIS) or Apache. However, you can also use the specialized
Microsoft Azure App Service to host web apps, mobile apps, logic apps, and application programming
interface (API) apps without deploying a dedicated Azure VM and installing the associated platform
software. With App Service, you can develop and deploy your own web app code or leverage platforms,
such as WordPress, Drupal, or Umbraco, that include ready-to-use web applications. In this module, you
will learn how to implement and manage highly scalable app services.

Objectives
After completing this module, you will be able to:

• Explain the different types of apps that you can create by using App Service.

• Select an App Service plan and a deployment method for apps in Azure.

• Use Visual Studio, File Transfer Protocol (FTP) clients, and Azure PowerShell to deploy web and mobile
apps to Azure.

• Configure web apps and use the Azure WebJobs feature to schedule tasks.

• Monitor the performance of web apps.

• Use Azure Traffic Manager to distribute requests between two or more app services.

Lesson 1
Introduction to App Service
Organizations are facing increasing demands to deliver great web-based apps that engage and connect
with customers. These apps must work on any device and should consume and integrate with data from
anywhere. App Service provides a powerful platform that enables companies to build web-based apps
that can work on any device. These apps can integrate easily with other software as a service (SaaS) apps,
such as Microsoft Office 365, Microsoft OneDrive for Business, and Facebook. They can also connect with
enterprise on-premises apps, such as SAP, Oracle, and others. In this lesson, you will learn about the types
of apps that you can implement by using App Service and about their capabilities.

Lesson Objectives
After completing this lesson, you will be able to:

• Describe the components of App Service.


• Describe the Azure Web Apps feature of App Service.

• Describe the Azure Mobile Apps feature of App Service.

• Describe the Azure Logic Apps feature of App Service.


• Describe the Azure API Apps feature of App Service.

• Describe the functionalities of the App Service Environment.

Demonstration: Preparing the lab environment


Perform the tasks in this demonstration to prepare the lab environment. The environment will be
configured while you progress through this module, learning about the Azure services that you will use in
the lab.

Important: The scripts used in this course might delete objects that you have in your
subscriptions. Therefore, you should complete this course by using new Azure subscriptions. You
should also use a new Microsoft account that is not associated with any other Azure subscription.
This will eliminate the possibility of any potential confusion when running setup scripts.

This course relies on custom Azure PowerShell modules including Add-20533EEnvironment to prepare
the lab environment for labs and Remove-20533EEnvironment to perform clean-up tasks at the end of
the module.

Overview of App Service


App Service provides a comprehensive platform
for building web-based applications that users
can consume on any device. App Service
provides a hosted service that developers can use
to build web and mobile apps. Additionally,
developers can use this service to develop API
apps and logic apps, which provide integration
with SaaS apps. App Service replaced several
separate Azure services, including Azure
Websites, Azure Mobile Services, and Azure
BizTalk Services, with a single integrated service
that offers a wider range of features than its
predecessors.

App Service provides a hosting platform for developing, building, and running the following app types:

• Web apps: websites and web apps.


• Mobile apps: backend services of mobile apps.

• API apps: RESTful APIs.

• Logic apps: automated workflows that integrate SaaS apps.

Overview of Web Apps


The Web Apps feature is a managed service that
facilitates running custom web apps in Azure
without having to explicitly deploy, configure,
and maintain Azure VMs. You can build web apps
by using the ASP.NET, ASP.NET Core, PHP,
Node.js, Java, and Python frameworks. Web apps
integrate with common development
environments such as Visual Studio Team
Services (VSTS), Visual Studio, and GitHub.

Traditionally, Azure Web Apps ran on Windows VMs by using the IIS Web Server role. In September
2017, Microsoft introduced the option of deploying web apps as Linux containers. This offering includes
two main features:

• App Service on Linux. This feature allows customers to deploy ready-to-use Linux containers.

• Web App for Containers. This feature allows customers to deploy their own Docker containers from
registries such as Docker Hub, Azure Container Registry, or a private registry.

Note: With Azure Web App on Linux, you can build apps by using .NET Core, Node.js, PHP,
and Ruby. Java is in preview at the time of authoring this content.

Key features of Web Apps include:

• Gallery applications. You can use Azure Marketplace to deploy components of your custom solution,
such as blogging sites, frameworks, and ASP.NET starter apps. You can browse through the available
choices by navigating to https://aka.ms/bbebhj

• Autoscaling. You can implement multiple instances of a web app to increase capacity and resilience.
Azure web app deployments automatically include an Azure Load Balancer, which distributes
incoming requests among individual web app instances. You also can configure the autoscaling
functionality to dynamically accommodate changes in web app demand.
• Continuous integration and deployment. You can deploy the web app code from cloud source-
control systems (such as Microsoft Visual Studio Team Services and GitHub), on-premises source-
control systems (such as Team Foundation Server [TFS] and Git), and from on-premises deployment
tools (such as Visual Studio, FTP clients, WebMatrix, and MSBuild). You also can use continuous
integration tools, such as Bitbucket, Hudson, or HP TeamSite, to automate build, test, and integration
tests.

• Deployment slots. If you are using the Standard, Premium, Premium V2, or Isolated App Service plan,
you can create multiple staging environments, also referred to as deployment slots or staging slots, for
each web app. For example, you can create one slot for your production web app, and then deploy
your tested and accepted code there. You then can create a second slot that is your staging
environment and deploy the new code to it to run acceptance tests. Each staging environment will
have a different URL.

• Testing in production. When a new version of your staging-slot web app passes all the tests, you can
redirect a percentage of the traffic targeting the production site to it. This allows you to perform final
validation of the functionality included in the new version of the web app.
• Azure WebJobs. WebJobs run background processes for web apps, allowing you to offload most of
the time-consuming and processor–intensive tasks from the web apps and run them outside of times
of heavy web app usage. You can perform a variety of common maintenance tasks, such as updating a
database or moving log files, by using WebJobs.

• Hybrid connections. You can implement hybrid connections from web apps in Azure to access
on-premises resources, such as instances of Microsoft SQL Server. This functionality involves
deployment of a lightweight software component named Hybrid Connection Manager to your
on-premises servers. Hybrid Connection Manager initiates the connection to Azure, which eliminates
the need to open your perimeter network to inbound traffic. You can use a single instance of Hybrid
Connection Manager to share a connection across multiple web apps.

• Azure virtual network integration. When using the Standard, Premium, or Premium V2 App Service
plan, you have the option of connecting your web apps to an Azure virtual network. This allows you,
for example, to persist content of your web app in a database residing in an Azure VM, without
having to expose that virtual machine to the internet. However, this capability relies on a point-to-site
virtual private network (VPN) connection between an App Service instance and an Azure virtual
network, so it is subject to performance and bandwidth limitations associated with the point-to-site
VPN. You can avoid these limitations by implementing the App Service Environment to deploy your
web app directly into an Azure virtual network.
• Authentication and authorization. App Service can use cloud identity providers to authenticate and
authorize user access. App Service offers built-in support for five identity providers: Azure
Active Directory (Azure AD), Facebook, Google, Microsoft accounts, and Twitter.

• Logs and alerts. App Service allows you to configure alerts and collect logs for monitoring and
troubleshooting purposes. Integration of web apps with New Relic provides deep insight into their
performance and reliability.

Note: The majority of the key Web Apps features described in this topic are also available when
using the other types of App Service apps. These features include autoscaling, continuous
integration, delivery, and deployment, WebJobs, hybrid connections, Azure virtual network
integration, and support for authentication and authorization.
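The "testing in production" feature described above redirects a percentage of traffic to a staging slot. The sketch below is a deterministic model of such percentage-based routing; the hashing scheme is an assumption made for illustration, not App Service's actual routing algorithm.

```python
# Sketch: percentage-based routing between a production slot and a
# staging slot, as in the "testing in production" feature described
# above. The hashing scheme is an illustrative assumption, not App
# Service's actual algorithm.
import hashlib

def pick_slot(client_id, staging_percent):
    """Deterministically route a client: the same client always lands
    on the same slot for a given percentage setting."""
    digest = hashlib.sha256(client_id.encode()).digest()
    bucket = digest[0] % 100          # stable bucket in 0..99
    return "staging" if bucket < staging_percent else "production"

print(pick_slot("client-42", 0))    # production: 0% goes to staging
print(pick_slot("client-42", 100))  # staging: all traffic redirected
```

A scheme like this keeps a given user on the same slot across requests, which matters when the two slot versions behave differently.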

Web apps often rely on other types of services that provide data storage and file storage. Data that the
server-side code formats into webpages and sends to users often resides in a database. In Azure, you can
use PaaS services such as Azure SQL Database, Cosmos DB, or Azure Table storage for this purpose.
Alternatively, you can take an IaaS-based approach and host a database on an Azure VM. Web apps also
often include media content, such as images, videos, and sound files. Typically, these files do not reside in
a database. Instead, to store them, you can use Azure blob storage. To improve performance of data
access, you can implement Azure Content Delivery Network and Azure Redis Cache.

Overview of Mobile Apps


The Mobile Apps feature is a part of App Service.
This feature provides a platform for building and
hosting backend services for mobile applications.
Mobile Apps help developers address the
challenging requirements for modern mobile-
device apps, including:

• Storing and accessing data.


• Sending notifications in response to custom-
defined events.
• Authenticating and authorizing users based
on Facebook, Twitter, Microsoft accounts, or
other identities.

• Implementing business logic.


The Mobile Apps feature allows developers to build cross-platform apps that can run on Windows, iOS, or
Android. The backend of these apps can operate exclusively in the cloud or connect with your on-
premises infrastructure. Mobile Apps can also benefit from the built-in push notification engine that sends
personalized push notifications to mobile devices.

Similar to Web Apps, the Mobile Apps feature supports autoscaling, continuous integration, delivery, and
deployment, WebJobs, hybrid connections, Azure virtual network integration, integrated authentication
and authorization, and alerts and logs.

Overview of Logic Apps


With the Logic Apps feature, you can automate
business processes by linking cloud-based apps,
such as Office 365 or Salesforce. You can create a
logic app directly in the Azure portal by using a
visual designer to build a workflow that
integrates SaaS apps via connectors available
from the Azure Marketplace. Each step in the
workflow is an action that accesses data or
services through a connector. More advanced
integration scenarios support the use of rules,
transformations, validations, and features that are
part of BizTalk Services.

When you develop logic apps, you can use different types of connectors, which belong to one of the
following categories:
• Built-in actions. Implemented as part of the Logic Apps engine. They allow for communication with
HTTP endpoints, such as Azure Functions or Azure API apps.

• Managed connectors. Referenced as connectors in the Visual Designer interface. They facilitate access
to external APIs. Logic Apps managed connectors consist of:

o Standard connectors. These connectors are included by default in the designer and provide
access to such services as Service Bus, Office 365, OneDrive, Yammer, Facebook, or Microsoft
Power BI.
o On-premises connectors. These connectors require the deployment of an on-premises data
gateway, which provides access to on-premises services and applications, such as SQL Server,
Microsoft SharePoint, or data residing on file shares.

Additional Reading: For more information, refer to: “Install the on-premises data
gateway for Azure Logic Apps” at: https://aka.ms/Va1fwc

o Integration account connectors. These connectors require the purchase of an integration


account. This account provides more advanced functionality, such as transformation and
validation of XML-formatted data or data in formats common in business-to-business messaging
scenarios, such as Electronic Data Interchange.

o Enterprise connectors. These connectors are available as optional components for an additional
cost. They facilitate connectivity to enterprise systems such as IBM MQ or SAP ERP (enterprise
resource planning).

Additional Reading: For more information, refer to “Connectors list” at:


http://aka.ms/Bcinbr

To invoke a logic app, you define a trigger. Logic Apps support poll triggers, push triggers, and recurrence
triggers. A common poll-trigger scenario involves using an Azure Storage account or an Azure SQL
database. You use the poll trigger to check periodically for data changes and use new data as input for
the workflow. Push triggers listen to events on a designated listener. Recurrence triggers start a new
workflow according to a schedule that you define.
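The poll-trigger pattern described above — checking periodically for new data and feeding it into a workflow — can be sketched as follows. The in-memory data source and the workflow callback are hypothetical stand-ins for an Azure Storage account or Azure SQL database and a logic app run.

```python
# Sketch: the poll-trigger pattern described above. Each poll compares
# the source against the last seen position and starts a workflow run
# for every item added since then. The in-memory "source" stands in for
# an Azure Storage account or Azure SQL database.

def poll(source, last_seen, start_workflow):
    """Return the new high-water mark after triggering the workflow
    once per item added since last_seen."""
    new_items = source[last_seen:]
    for item in new_items:
        start_workflow(item)
    return last_seen + len(new_items)

runs = []
source = ["order-1", "order-2"]
mark = poll(source, 0, runs.append)       # first poll: two new items
source.append("order-3")
mark = poll(source, mark, runs.append)    # second poll: one new item
print(runs)  # ['order-1', 'order-2', 'order-3']
```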

To create a logic app from the Azure portal:

1. Click + Create a resource, click Web + Mobile, and then click Logic App.

2. On the Create logic app blade, fill in the following information, and then click Create. Configure the
following settings:

o Name. Enter a descriptive name.


o Subscription. Select your Azure subscription.

o Resource Group. Select an existing resource group or create a new resource group.

o Location. Choose the datacenter that is closest to your location.

o Log Analytics. Enable or disable redirection of logic app runtime logic events to Log Analytics
data store.

3. After you create the logic app, the Logic App Designer blade appears, where you will be prompted
to choose a template for the design of your logic app. If you want to design the logic app without
relying on an existing template, click Blank Logic App. Some examples of existing templates include:

o When a new file is created in Dropbox, copy it to OneDrive.

o Send an email when an item in a SharePoint list is modified.

o Deliver an SMTP email on new tweets.

Overview of API Apps


An API is a set of routines, protocols, and tools
that developers use for building software
applications. An API specifies how software
components should interact. APIs make building
blocks for developing apps. Developers often
build APIs that they can reuse in their projects.
API Apps is a hosted service that can help
developers build, host, and use APIs by using
popular development platforms, such as
ASP.NET, PHP, and Python. You can create an API
app by using the graphical interface in the Azure
portal, and integrate it with Visual Studio for
developing, debugging, and managing. When you create a new API app, Azure generates the code that
enables different SaaS applications, such as Office 365 and Salesforce, to use the API app. An API app has
integrated support for Swagger API metadata, which describes the API’s capabilities and generates the
client code for accessing the API by using languages such as C#, Java, and JavaScript.

Note: Swagger is a popular framework for APIs that provides interactive documentation,
discoverability of created APIs, and the ability to generate client SDKs. For more information,
refer to the Swagger website at: http://aka.ms/R09mma

When integrating your API apps with Web Apps, you might need to implement Cross-Origin Resource
Sharing (CORS). In a common scenario, JavaScript runs within a browser targeting a web app that is
accessible via a different domain than the one that you assigned to the API app. By enabling CORS, you
allow JavaScript to make API calls to domains other than the domain from which the JavaScript code
originates. For example, the JavaScript code might be part of a webpage of a web app accessible at
www.adatum.com, while the API app that the JavaScript references is accessible via
customapi.azurewebsites.net.
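The server-side half of CORS amounts to checking the request's Origin header against an allow list and echoing it back when permitted. The sketch below models that check conceptually, reusing the example domain from the text; App Service performs this for you when you enable CORS on the API app.

```python
# Sketch: deciding the Access-Control-Allow-Origin response header for
# a cross-origin call, using the example domain from the text. This is
# a conceptual model only; App Service handles CORS for you when you
# enable it on the API app.

ALLOWED_ORIGINS = {"https://www.adatum.com"}

def cors_headers(request_origin):
    """Return the CORS response headers ({} when the origin is denied)."""
    if request_origin in ALLOWED_ORIGINS:
        return {"Access-Control-Allow-Origin": request_origin}
    return {}

print(cors_headers("https://www.adatum.com"))
print(cors_headers("https://evil.example"))  # {}
```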

To create and configure an API app, perform the following steps:

1. Open the Azure portal, and then sign in to your subscription.

2. Click + Create a resource, click Web + Mobile, click See all, and then click API App.

3. On the API App blade, click Create.

4. On the API App blade, enter the following information, and then click Create:
o App name. Provide the unique name for your API app that, in combination with the Microsoft-
owned public domain namespace .azurewebsites.net suffix, will form the fully qualified domain
name of the API app.

o Subscription. Select the subscription in which you will provision the new API app.
o Resource Group. Select an existing resource group or create a new resource group.

o App Service plan/Location. Select an existing service plan or create a new App Service plan.

o Application Insights: Specify whether you want to enable Application Insights in your API app.
The data that Application Insights collects helps you to detect and diagnose any functional and
capacity issues of the app and identify its usage patterns.

5. Configure your API app:


o In the Azure portal, select your API app.

o On the API app blade, click Quickstart.


o On the API app Quickstart blade, click the entry representing the development platform that
you intend to use. In this example, click ASP.Net.

o On the ASP.Net blade:

▪ Create a starter backend API. Select this option to generate a sample API app backend.
You will be able to download the resulting project and then open and customize it in Visual
Studio. Once you complete your customizations, you can publish the code to the API app
backend.

▪ Connect your client. You have two options:

o Select CREATE A NEW APP and then download a personalized Windows project that
you can open in Visual Studio and that is preconfigured to work with your new hosted
API.

o Select CONNECT AN EXISTING APP.

▪ Get the tools. You can download Visual Studio Community tools for developing APIs that
can run on a variety of platforms and devices.
6. Back on the API App blade, scroll down to the API section, and then click API definition.

7. On the API Definition blade, you can set the endpoint that provides Swagger 2.0 JavaScript Object
Notation (JSON) metadata.

8. Generate the client code:

o Open your API app project in Visual Studio.

o In Solution Explorer, right-click your API app, point to Add, and then click REST API Client.

o In the Add REST API Client dialog box, ensure that the Swagger Url option is selected, and then
click Select Azure Asset.

o In the App Service dialog box, ensure that you have authenticated to the Azure subscription
hosting the API app, expand the resource group that contains the API app, and then click your API app.
o In the Add REST API Client dialog box, click OK. This will create a folder within your project that
includes the yourAPI.cs file. The file contains the REST API client code for your API app.
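Step 7 of the procedure above references Swagger 2.0 JSON metadata. The fragment below is a minimal, hypothetical definition of the kind such an endpoint might expose — the API title and path are invented for illustration — parsed here simply to show its required top-level fields.

```python
# A minimal, hypothetical Swagger 2.0 definition of the kind exposed by
# an API app's API definition endpoint. The title and path are invented
# for illustration only.
import json

swagger_doc = """
{
  "swagger": "2.0",
  "info": { "title": "ContosoApi", "version": "1.0" },
  "paths": {
    "/api/values": {
      "get": { "responses": { "200": { "description": "OK" } } }
    }
  }
}
"""

doc = json.loads(swagger_doc)
print(doc["swagger"])  # 2.0
```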

Overview of the App Service Environment


Business-critical apps often need to operate in
highly scalable and secure environments. You can
use the App Service Environment to
accommodate these requirements. App Service
Environment supports hosting web apps, mobile
apps, and API apps that require highly scalable
compute resources, isolation, and direct virtual
network connectivity.

At the time of writing this course, App Service


Environment (ASE) is available in two versions:
ASEv1 and ASEv2. Both versions offer the same
core capabilities for hosting highly scalable,
multitier workloads. However, there are several important differences between them, including the
following:
1. Deployment model. ASEv1 supports both the classic and Azure Resource Manager deployment
models. ASEv2 supports only the Azure Resource Manager deployment model.

2. Scaling capabilities. ASEv1 has the default maximum limit of 55 instances. With ASEv2, by default, you
can deploy up to 100 instances per subscription. In both cases, you can request a limit increase by
contacting Azure support.

3. Management model. Management overhead in ASEv1 is higher compared with ASEv2. For example,
you must manage IP addressing of instances and explicitly increase or decrease the number of
instances before you can scale their App Service plan. To scale with ASEv2, you simply change the
number of instances associated with the App Service plan, and the platform automatically handles
provisioning of instances and their IP address assignments.

4. Pricing model. The cost of ASEv1 corresponds directly to the total number of cores of Azure VMs that
form the environment. With ASEv2, there is a flat monthly cost, regardless of the number of instances,
in addition to per-core charges. ASEv1 App Service plans use the Premium pricing tier, while the
ASEv2 App Service plans use the Isolated pricing tier.

Note: You will learn more about App Service plans later in this module.

Both ASEv1 and ASEv2 consist of the front-end and worker tiers. The front-end handles incoming HTTP
and HTTPS requests, distributing them to the workers in a load-balanced manner. ASEv2 automatically
adjusts the number of front-end instances based on the number of worker instances that you specify
when scaling the corresponding App Service plan. By default, ASEv2 contains two front-end instances. For
every 15 worker instances, the platform automatically provisions an additional front-end instance. You can
change this ratio, but keep in mind that this will affect pricing.
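One illustrative reading of the default front-end-to-worker ratio described above — two front-end instances as a baseline, plus one more for every 15 workers — is the small model below. Treat it as an approximation of the stated behavior, not the platform's exact provisioning formula.

```python
# Sketch: ASEv2 default front-end scaling. The text states a baseline of
# two front-end instances plus an additional instance for every 15
# workers. This function is one illustrative reading of that rule, not
# the platform's exact provisioning formula.

def default_front_ends(workers):
    return 2 + workers // 15

for w in (0, 14, 15, 30):
    print(w, default_front_ends(w))
```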

ASEv1 contains up to three worker pools. For each pool, you can specify the number and size of Azure
VMs. There are four VM size options labeled P1 to P4. Each worker pool can host multiple App Service
plans, up to the total capacity of the worker pool. In ASEv2, the concept of worker pools no longer
applies. Instead, when you create an App Service plan and choose the corresponding Azure VM size, the
platform will provision the corresponding workers and automatically manage their number, according to
the App Service plan scale-out settings. In the case of ASEv2, there are three VM size options labeled I1
to I3.

In both ASEv1 and ASEv2, front-end and worker instances reside within the same subnet of a virtual
network. Apps that run as part of the App Service Environment communicate with each other within the
virtual network. You can use Network Security Group (NSG) rules assigned to the subnet in which you
provision the App Service Environment to restrict inbound and outbound connectivity.

Note: In both ASEv1 and ASEv2, you do not have direct access to the Azure VMs. For
example, you cannot connect to them via Remote Desktop Protocol (RDP). However, you have
the option of connecting to them via the Source Control Management (SCM) endpoint (also
referred to as Kudu) which provides a console view of the sandbox where individual ASE-hosted
web apps are running. You will learn about SCM later in this module.

You can use one of the following methods to create an ASEv2:

• Create a new App Service app with a corresponding service plan and App Service Environment via the
Azure portal.

• Create an App Service Environment without a corresponding App Service app via the Azure portal.

• Create an App Service Environment via an Azure Resource Manager template.

To create an ASEv2 and a corresponding web app via the Azure portal, use the following procedure:

1. Sign in to the Azure portal.

2. In the hub menu, click + Create a resource, click Web + Mobile, and then click Web App.

3. On the Web App blade, in the Name box, type the unique name for the web app that, in
combination with the Microsoft-owned public domain namespace .azurewebsites.net suffix, will
form the fully qualified domain name (FQDN) of the web app.

4. Select the Azure subscription where you want to create the ASE.

5. In the Resource Group box, select an existing resource group or specify the name of a new resource
group that you want to create.

6. Set OS to either Windows or Linux.

7. Click App Service plan/Location.

8. On the App Service plan blade, click Create New.


9. On the New App Service Plan blade, specify a custom name that you want to assign to the service
plan, select the Azure region where you want to deploy the web app, and then click Pricing tier.

10. On the Choose your pricing tier blade, click one of the Isolated pricing tier stock keeping units
(SKUs), and then click Select.

11. Back on the New App Service Plan blade, in the App Service Environment Name box, type the
unique name for your ASE that, in combination with the Microsoft-owned public domain namespace
p.azurewebsites.net suffix, will form the FQDN of the ASE.

12. In the Virtual Network section, select an existing virtual network or create a new virtual network. If
you decide to use an existing virtual network, you will need to specify the name of a new subnet and
its IP address range. If you choose to create a new virtual network, you must specify its name. The
platform will provision it with the IP address space of 192.168.250.0/23 along with a single subnet
named default with the IP address range 192.168.250.0/24.

13. Click OK, and then click Create.
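The default address ranges mentioned in step 12 can be checked with Python's standard ipaddress module; this simply verifies that the default /24 subnet fits inside the /23 address space the platform provisions.

```python
# Verify that the default subnet (192.168.250.0/24) created by the
# platform fits inside the virtual network's 192.168.250.0/23 address
# space, as described in step 12 above.
import ipaddress

vnet = ipaddress.ip_network("192.168.250.0/23")
default_subnet = ipaddress.ip_network("192.168.250.0/24")

print(default_subnet.subnet_of(vnet))   # True
print(vnet.num_addresses)               # 512
```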

Note: At the time of authoring, you cannot use a pre-created subnet on an existing virtual
network when using the Azure portal to deploy an App Service Environment. Instead, the
platform will provision a new subnet during the deployment.

To create an ASEv2 via the Azure portal without a corresponding App Service app and without assigning
an App Service plan, use the following procedure:

1. Sign in to the Azure portal.


2. In the hub menu, click + Create a resource, click Web + Mobile, and then click App Service
Environment.

3. On the App Service Environment blade, in the Name box, type the unique name for your App Service
Environment that, in combination with the Microsoft-owned public domain namespace
p.azurewebsites.net suffix, will form the FQDN of the ASE.

4. Select the Azure subscription where you want to create the ASE.
5. In the Resource Group box, select an existing resource group or specify the name of a new resource
group that you want to create.

6. In the Virtual Network box, select an existing virtual network or create a new virtual network. If you
choose to create a new virtual network, you must provide its location. The new virtual network will
have the IP address space of 192.168.250.0/23 and a single subnet named default with the IP
address range 192.168.250.0/24.

7. As part of the virtual network configuration, specify the VIP Type (External or Internal). For an
external virtual IP (VIP), specify the number of IP addresses that the platform should assign to it. The
number can range between one and 10. You will need more than a single IP address if you intend to
use IP-based Secure Sockets Layer (SSL) web apps. For an internal VIP, specify the name of a Domain
Name System (DNS) subdomain. The name of the subdomain, combined with the name you assigned
to App Service Environment in step 3, will form the URL of ASE-hosted apps.

8. Click Create.

Additional Reading: For details regarding deployment of ASEv2 via an Azure Resource
Manager template, refer to: “Create an ASE by using an Azure Resource Manager template” at:
https://aka.ms/w556u8

Question: You work as a developer for your organization, and management asks you to list
the major benefits of using App Service. How would you answer?

Lesson 2
Planning app deployment in App Service
Architects and developers can choose from a few hosting and deployment options when designing and
implementing web solutions in Azure. This lesson describes these options. You will learn about the App
Service plans that allow you to scale web, mobile, and API apps and provide the functionality that you
need at optimal cost. In addition, you will learn how the tools and source-code control systems that
developers use influence the choice of deployment methodology.

Lesson Objectives
After completing this lesson, you will be able to:

• Identify the differences between Web Apps, Azure Cloud Services, and web apps hosted on Azure
VMs.

• Identify the differences between the App Service pricing tiers.


• Describe the different methods of deploying and updating code in App Service apps.

Comparing Web Apps, Azure Cloud Services, and Azure VMs


To host a web application in Azure, you can use
several options, including Azure VMs, Web Apps,
or Azure Cloud Services. When deciding which
option to select, you must consider the level of
control, the flexibility and speed with which the
application can scale, and the programming
languages and frameworks that you want to use.

Note: The Azure Resource Manager deployment model does not support Azure Cloud Services.

Azure VMs
Because an Azure VM can host practically any web server, such as IIS or Apache, you can use it to host a
wide range of web applications. This scenario is like running a traditional web farm to host your web
application, except that the servers are in Azure datacenters, rather than on-premises. Therefore, Azure
VMs offer a straightforward migration path for on-premises web applications. This approach also
facilitates migration of any supporting servers, such as database servers, onto other VMs in the same or a
directly connected virtual network. To accommodate autoscaling requirements, you can use VM scale sets
in Azure.

If you choose to host a web application on Azure VMs, you have maximum control over their operating
system and supporting software components. For example, you can connect to them via RDP or Secure
Shell (SSH) and install a specific version of middleware or a development framework. On the other hand,
maintaining these components will require extra effort. In addition, unless you use Azure VM Scale Sets,
scaling out requires that you provision additional Azure VMs.

Web Apps
Alternatively, you can choose to host your web application by using the Web Apps feature. After you
create a new web app, you can upload custom web application code into it or deploy any of the ready-to-use Azure Marketplace web applications, such as those that Drupal, WordPress, and Umbraco provide.

You can scale web apps vertically by modifying the pricing tiers of their service plan, which changes the
volume of workload that a single web app instance can handle. You can also scale web apps horizontally
by changing the number of instances and relying on Azure built-in load balancing to distribute the traffic
across them. However, unless you are using App Service Environment, you cannot scale individual
components of a multi-tier web app separately. You also cannot establish an RDP or SSH connection to
the virtual machine hosting a web app.

Cloud services
You can also deploy your web application as a cloud service. A cloud service consists of a web role, which
serves as the front end of an application, and one or more worker roles, which are responsible for running
background tasks. You can scale individual roles independently by specifying the number of role instances
in each, which gives you more control over scalability compared with web apps. Role instances run
Windows Server and support connectivity via RDP.
Platform as a service (PaaS) cloud services are unique to Azure. Existing web applications might require
significant modification before they can run as a cloud service.

Managing App Service plans


App Service provides an abstraction layer
between web apps and virtual machines on
which these web apps are running. An App
Service plan defines the capabilities and capacity
of virtual machine resources available to its apps.
Each plan is associated with a single subscription,
an Azure region, a resource group, a pricing tier,
an instance size, and an instance count. If a plan
contains multiple instances of virtual machines,
then every app that is part of this plan is running
on every instance. You choose how many App
Service plans to create and how to allocate App
Service apps to these plans. You make this decision based on the apps’ resource requirements and their
need to scale independently of each other.

When you create an app in App Service, choose an existing service plan or create a new one. Multiple
web, mobile, and API apps can share a single App Service plan, but they each must belong to one and
only one App Service plan.

When you create a new App Service plan, you must provide a unique name and select an appropriate
pricing tier and Azure region. You can move apps that you create in one service plan into another, should
they require different capacity and scaling options. You can scale an App Service plan to meet the
demands of apps by changing the plan’s pricing tier, instance size, or instance count.
At the time of writing this course, there are seven pricing tiers available for all Windows Server–based App
Service apps: Free, Shared, Basic, Standard, Premium, Premium V2, and Isolated.

Note: There are separate pricing tiers for Linux-based App Service apps and a
Consumption plan for Azure Logic Apps and Azure Functions.

The Free pricing tier


App Service plans in the Free pricing tier allow you to create a maximum of 10 web, mobile, or API apps,
and limit you to 1 gigabyte (GB) of storage. Plans in this tier do not support custom domain names, so all
app DNS names have the azurewebsites.net DNS suffix. You cannot scale out Free tier apps to multiple
instances, and they do not qualify for any availability service level agreement (SLA). Mobile apps in this
pricing tier can facilitate communication and offline synchronization with up to 500 devices per day. The
outbound traffic for apps is limited to 165 megabytes (MB) per day.

The Shared pricing tier


App Service plans in the Shared pricing tier have unlimited outbound data transfer and allow you to use
custom domains, although without Secure Sockets Layer (SSL) support. You cannot scale Shared tier apps
and they do not qualify for any SLAs. You can create up to 100 web, mobile, or API apps in the Shared
App Service plans. Other limits are the same as in the Free service tier, such as limits on storage capacity
and support for communication with mobile devices.

The Basic pricing tier


App Service plans in the Basic pricing tier provide up to 10 GB of storage. Additionally, they allow you to
use custom domains with SSL encryption. The Basic tier apps also qualify for the 99.9 percent uptime SLA,
and you can scale them up to three instances, with Azure load balancers distributing incoming
connections.

The Standard pricing tier


App Service plans in the Standard pricing tier provide up to 50 GB of storage and you can scale out apps
to 10 instances, with Azure load balancers distributing incoming connections. Standard tier apps support
up to five staging slots. They also integrate with Azure Traffic Manager, facilitating geo-distributed
deployments. You can also connect them to Azure virtual networks and to on-premises networks.

The Premium pricing tier


In addition to the features available in the Standard pricing tier, Premium App Service plans provide up to
250 GB of storage, support up to 50 instances, and can have up to 20 staging slots. This facilitates
deployment of enterprise-grade workloads that require a high degree of scalability.

The Premium V2 pricing tier


The Premium V2 pricing tier has most of the same characteristics as the Premium pricing tier, but it offers
twice as much memory per instance as the corresponding Premium tier instance sizes. There is also a
central processing unit (CPU) performance benefit, because instances are implemented as Dv2 series VMs,
which feature faster processors than their Premium pricing tier counterparts.

The Isolated pricing tier


The Isolated pricing tier is available only when deploying ASEv2. It provides the highest scalability, with
up to 100 instances per service plan. Other features include integration with Azure virtual networks, 1
terabyte (TB) of storage, and the same CPU and memory characteristics as the corresponding Premium
instance sizes.
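
The scale-out limits quoted in the tier descriptions above can be collected into a small lookup. The values mirror this lesson's text and should be verified against the current App Service pricing page:

```shell
# Maximum scale-out per pricing tier, as quoted in this lesson.
# Illustrative only; confirm current limits on the pricing page.
max_instances() {
  case "$1" in
    Free|Shared) echo 1 ;;    # these tiers cannot scale out
    Basic)       echo 3 ;;
    Standard)    echo 10 ;;
    Premium*)    echo 50 ;;   # Premium and Premium V2
    Isolated)    echo 100 ;;  # requires ASEv2
    *)           return 1 ;;
  esac
}

max_instances Standard    # prints 10
```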

Additional Reading: For more information on App Service pricing tiers, refer to: “App
Service Pricing” at: http://aka.ms/Rgjtys

Note: At the time of authoring this content, App Service on Linux supports only Basic and
Standard pricing tiers. In addition, there are also additional considerations regarding using
existing App Service plans on Linux web apps:

• It is not possible to share the same App Service plan across Windows-based web apps and App
Service on Linux-based web apps.

• If Windows-based web apps and App Service on Linux–based web apps share the same resource
group, then the App Service plans for the Linux-based web app must reside in a different resource
group.

Comparing app deployment methods in App Service


Developers and web app administrators might
choose different approaches for deploying web,
mobile, and API apps. They often make these
decisions based on the location of the source
code. Individual developers typically store source
code on their computers, on which they run
integrated development environment (IDE) tools
that they use to write code. Developing code
collaboratively as part of a team often requires
the use of a source-control system. Examples of
such a system include Microsoft Team
Foundation Server (TFS) in an on-premises
environment and Visual Studio Team Services (VSTS) in the cloud.

Source code on client machines


If developers are not using a source-control system to coordinate development, they can deploy an app
to Azure directly from their chosen IDE tool, such as Visual Studio or WebMatrix via Web Deploy. They
also can use the command-line MSBuild tool, which integrates with Web Deploy, to script deployment
processes. It is also possible to perform code uploads by using FTP. However, Web Deploy offers extra
features, including the ability to update connection strings and to identify files to exclude from
subsequent uploads if their content has not changed. Another mechanism for deploying code is the Kudu
engine. Kudu supports version control, package restore, and webhooks for continuous deployment.

Source code in an on-premises source-control system


If developers are using a source-control system within their on-premises network, they can configure that
system to perform continuous deployment to an app service. The deployment should target a staging slot
to ensure that developers and testers can validate changes before they reach the production slot. On-
premises source-control systems include TFS, GitHub, and Mercurial repositories.

Source code in a cloud source-control system


If developers are using a cloud-hosted source-control system, such as Team Foundation Version Control
(TFVC) in VSTS, they can configure continuous deployment in a similar way to on-premises source-control
systems. There is a wide range of open-source solutions that extend the development and deployment
capabilities in this scenario. For example, developers can use Git as a distributed source-control system for
VSTS, instead of the centralized TFVC.

Additional Reading: For more information on deployment mechanisms, refer to: “Deploy
your app to Azure App Service” at: http://aka.ms/jyfupy

Question: Given the flexibility that you have when choosing an app-hosting model in Azure,
what key factors will influence your decision?

Lesson 3
Implementing and maintaining web apps
Web designers and developers can create web applications by using a variety of tools, such as graphic
design packages, image-editing packages, web design software, and IDEs, such as Visual Studio. Similarly,
there are many methods for packaging and deploying a web application to Web Apps. In this lesson, you
will learn about those methods. You will find out how to create Azure web apps and how to deploy and
update web application code by relying on Visual Studio and source-control software.

Lesson Objectives
After completing this lesson, you will be able to:

• Explain how to create a new web app in Azure by using the Azure portal, Azure PowerShell, and
Azure Command Line Interface (Azure CLI).

• Explain how to use Web Deploy to deploy a web app to Azure from Visual Studio.
• Explain how to deploy updates to an existing web app.

Creating web apps


If you want to run a web application by using
Web Apps, you must create a new Web Apps
instance. This allows you or the developers to
deploy the web application’s code and content
into it. You can create this instance by using
several methods, including the Azure portal,
Azure PowerShell, Azure CLI, and Azure Resource
Manager templates.

You can also create a new Web Apps instance and deploy your app into it directly from Visual Studio.
Make sure to include Azure SDK for .NET as part of your Visual Studio installation. This will provide you
with access to graphical tools, command-line utilities, and client libraries that simplify the process of
deploying apps to Azure.

Creating new web apps in the Azure portal


To create a new web app or App Service for Linux apps in the Azure portal, perform the following steps:

1. On the toolbar to the left, click + Create a Resource, select the Web + Mobile link, and then click
Web App.

2. On the Web App blade, in the App name text box, type a unique and valid name. If the name is
unique and valid, a green check mark appears. This name, along with the azurewebsites.net suffix,
will become the DNS name of the web app.

3. In the Subscription drop-down list, select your subscription.

4. In the Resource Group section, select an existing resource group or specify the name of a new
resource group that you want to create.

5. Choose either a Windows or Linux operating system on which the web app will run.

6. In the App service plan/Location section, select an existing plan or create a new App Service plan,
and then select the Azure region where you want your web app to run.

7. If you selected Windows as the operating system, specify whether to enable Application Insights. If
you selected Linux as the operating system, select Runtime Stack from the drop-down list box.

8. Click Create. Azure then creates the new web app.
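
A quick sketch of the name uniqueness check in step 2 and the resulting DNS name. The app name is hypothetical and the character rule shown is a simplification of the portal's validation:

```shell
# Hypothetical app name; App Service names consist of letters,
# digits, and hyphens, and must be globally unique.
app_name="adatum-sales-web"

# Simplified validity check: alphanumeric with interior hyphens.
if printf '%s' "$app_name" | grep -Eq '^[A-Za-z0-9][A-Za-z0-9-]*[A-Za-z0-9]$'; then
  # The chosen name plus the azurewebsites.net suffix becomes the DNS name.
  fqdn="${app_name}.azurewebsites.net"
fi
echo "$fqdn"
```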

Creating new web apps by using Azure PowerShell


To create a new web app by using Azure PowerShell, use the following script:

New-AzureRmResourceGroup -Name AdatumRG -Location eastus
New-AzureRmAppServicePlan -Name AdatumStandardPlan -ResourceGroupName AdatumRG `
  -Location eastus -Tier Standard -WorkerSize Small -NumberofWorkers 2
New-AzureRmWebApp -ResourceGroupName AdatumRG -Name WebAppName -Location eastus `
  -AppServicePlan AdatumStandardPlan

Creating new web apps by using Azure CLI


To create a new web app by using Azure CLI, use the following sequence of commands:

az group create --location eastus --name AdatumRG
az appservice plan create --name AdatumStandardPlan --resource-group AdatumRG \
  --location eastus --sku S1 --number-of-workers 2
az webapp create --resource-group AdatumRG --name WebAppName --plan AdatumStandardPlan

Creating new Web App for Containers apps by using Azure CLI
To create new Web App for Containers apps by using Azure CLI, use the following sequence of
commands:

az group create --location eastus --name AdatumRG
az appservice plan create --name AdatumStandardPlan --resource-group AdatumRG \
  --location eastus --sku S1 --number-of-workers 2 --is-linux
az webapp create --resource-group AdatumRG --name WebAppName --plan AdatumStandardPlan \
  --deployment-container-image-name publisher/image:tag

Creating a web app project and publishing it to a new web app by using Visual
Studio
1. Open Microsoft Visual Studio.
2. On the File menu, click New, and then click Project.

3. In the New Project dialog box, expand Installed > Templates > Visual C# > Web, and then select
the ASP.NET Web Application (.NET Framework) template.

4. In the New Project dialog box, enter the following information, and then click OK:

o Name. Provide a name for the project.

o Location. Provide a location to store the new project files.

o Solution name. Provide a name for the solution.

5. In the New ASP.NET Web Application dialog box, select the MVC template.

6. On the right side of the dialog box, click Change Authentication.

7. In the Change Authentication dialog box, ensure that No Authentication is selected, and then
click OK.

8. In the New ASP.NET Web Application dialog box, click OK.

9. In Visual Studio, in Solution Explorer, right-click your project, and then click Publish.

10. Ensure that the Microsoft Azure App Service icon and the Create New option are selected, and
then click Publish.

11. In the Create App Service window, click Add an account.

12. When prompted, specify the user name and password of an account with sufficient permissions to
create a new Web Apps instance, and then click Sign in.

13. Back in the Create App Service window, specify the following settings:

o Web App Name. Provide a unique name for your web app that will be appended with the
Microsoft-owned public domain azurewebsites.net.
o Subscription. Select your subscription.

o Resource Group. Select an existing resource group or specify the name of a new resource group
that you want to create.

o App Service Plan. Select an existing plan or create a new service plan by choosing the name of
the Azure region where you want to run your app, the pricing tier, and an instance size.

14. To complete the creation of the web app in your Azure subscription, click Create.

Deploying web apps


You can deploy your web apps by using several methods, such as copying files manually by using FTP, or
synchronizing files and folders to App Service from a cloud storage service, such as OneDrive or Dropbox.
App Service also supports deployments by using the Web Deploy technology. This approach is available
with Visual Studio, WebMatrix, and Visual Studio Team Services.

If you want to perform deployments by using Git or FTP, you must configure deployment credentials.
These credentials allow you to upload the web app’s code and content to the new web app, making it
available for browsing.

Web Deploy
Web Deploy is a technology with client-side and server-side components. It allows you to synchronize
content and configuration metadata of web apps residing on IIS servers. You can use Web Deploy to
migrate content from one IIS web server to another, or you can use it to deploy web apps from
development environments to staging and production web servers.
The server-side components of Web Deploy require the IIS web platform. The client-side components are
available with a few Microsoft development tools, including Visual Studio and WebMatrix. Web Deploy
offers several advantages, including the following:

• Uploading only files that have changed. This minimizes upload times and the volume of network
traffic.
• Support for HTTPS protocol. This eliminates the need to open additional ports on a web server’s
firewall.

• Support for access control lists (ACLs). This further secures the target web server.

• Support for SQL scripts. This makes it possible to set up a database as part of a deployment.

• Controlling web app configuration by modifying its web.config file. This allows you, for example, to
replace a database-connection string so that the web app that you deploy connects to a production
database, rather than a development database.
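
The connection-string behavior in the last bullet can be approximated locally. This is only a sketch using sed on a sample file; real Web Deploy uses its parameterization system, not plain text substitution:

```shell
# Build a sample web.config fragment (illustrative content only).
cat > web.config <<'EOF'
<connectionStrings>
  <add name="Default" connectionString="Server=devsql;Database=DevDb" />
</connectionStrings>
EOF

# Replace the development connection string with a production one,
# approximating what Web Deploy parameterization automates.
sed -i 's/Server=devsql;Database=DevDb/Server=prodsql;Database=ProdDb/' web.config
grep 'connectionString' web.config
```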

To use Visual Studio to deploy your project as an Azure web app, follow these steps:

1. In Visual Studio, open your project that contains the MVC application that you plan to deploy in
Azure.

2. In Visual Studio, in Solution Explorer, right-click your project, and then select Publish.

3. Ensure that the Microsoft Azure App Service icon and the Select Existing option are selected, and
then click Publish.
4. In the App Service dialog box, sign in to your Azure subscription, select your subscription, the
resource group containing the Azure web app, and the web app, and then click OK.

5. Upon a successful deployment, the updated web app will appear on a new tab within the Visual
Studio interface.

MSDeploy.exe
The Web Deploy client is available as a command-line tool, MSDeploy.exe. Visual Studio, WebMatrix, and
PowerShell cmdlets use this tool to execute Web Deploy operations.

Additional Reading: To download the MSDeploy.exe tool, refer to: “Web Deploy 3.6” at:
http://aka.ms/Fir58l

Setting up deployment credentials


If you use FTP or Git to deploy a web application’s content and code to an Azure web app, you cannot use
your Azure account credentials to authenticate. Instead, you must set up deployment credentials. To do
this in the Azure portal, perform the following steps:

1. In the hub menu on the left side, click All services, and then click App Services.

2. Click the web app for which you want to set up deployment credentials.
3. On the web app blade, click Deployment credentials.

4. On the Deployment credentials blade, in the FTP/deployment username text box, type the name
of the user you intend to create.
5. In the Password text box, type the password.

6. In the Confirm password text box, type the same password, and then click Save.

Downloading a publishing profile


You can generate a publishing profile for each web app that you create. This profile is an XML file with the
.publishsettings extension, which you can download from the Azure portal. It includes all the credentials,
connection strings, and other settings that are required to publish a web app from an IDE such as Visual
Studio.
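
Because a .publishsettings file is plain XML, its credentials can be extracted with standard text tools. The profile below is a minimal hypothetical sample, not the full format, and the parsing mirrors what the PowerShell script later in this lesson does with SelectNodes:

```shell
# Minimal, hypothetical publishing profile; real files contain
# additional attributes and profiles.
cat > sample.publishsettings <<'EOF'
<publishData>
  <publishProfile publishMethod="MSDeploy" userName="$myWebApp" userPWD="examplePWD" />
</publishData>
EOF

# Pull the MSDeploy user name out of the XML with a text filter.
username=$(sed -n 's/.*publishMethod="MSDeploy".*userName="\([^"]*\)".*/\1/p' \
  sample.publishsettings)
echo "$username"
```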

Automating web app deployment by using Azure PowerShell, Azure CLI, and Git
You can use a variety of scripting techniques to automate the deployment process. For example, to
publish a web application project from a local Git repository to myWebApp in a resource group named
myResourceGroup, you could run the following Windows PowerShell script:

$gitRepoPath = 'F:\Repos\myWebApp'
$webAppName = 'myWebApp'
$propertiesObject = @{
    scmType = 'LocalGit'
}
Set-AzureRmResource -PropertyObject $propertiesObject -ResourceGroupName myResourceGroup `
-ResourceType Microsoft.Web/sites/config -ResourceName $webAppName/web `
-ApiVersion 2015-08-01 -Force
$xml = [xml](Get-AzureRmWebAppPublishingProfile -Name $webAppName `
-ResourceGroupName myResourceGroup -OutputFile null)
$username =
$xml.SelectNodes("//publishProfile[@publishMethod=`"MSDeploy`"]/@userName").value
$password =
$xml.SelectNodes("//publishProfile[@publishMethod=`"MSDeploy`"]/@userPWD").value
git remote add azure "https://${username}:$password@$webappname.scm.azurewebsites.net"
git push azure master

You can accomplish the same objective by using Azure CLI:

gitrepopath='E:\Repos\myWebApp'
username='myWebAppUser'
password='Pa55w.rd1234'
webappname='myWebApp'
az webapp deployment user set --user-name $username --password $password
url=$(az webapp deployment source config-local-git --name $webappname \
  --resource-group myResourceGroup --query url --output tsv)
cd $gitrepopath
git remote add azure $url
git push azure master

Deploying a web app by using FTP


FTP is an older protocol that is commonly used for uploading web applications to web servers.

FTP clients
FTP clients include:

• Web browsers. Many web browsers support FTP and HTTP. This means that you can use your web
browser to browse FTP sites and upload content. However, advanced features, such as automatic
retries in case of dropped connections, are not available in most browsers.

• Dedicated FTP clients. Several dedicated FTP clients are available as free downloads. The most
popular ones include FileZilla, SmartFTP, and Core FTP. Their advanced features, such as the ability to
handle hundreds of files, make them suitable for web app deployment.

• IDEs. Visual Studio and other IDEs support FTP for web app deployment.

Configuring an FTP transfer


To deploy a web app by using FTP, you must configure your client with the destination URL of the remote
FTP server and the credentials that FTP can use to authenticate. These are the Azure web app deployment
credentials. In addition, you must choose either the active or the passive FTP mode.

By default, FTP uses active mode. In this mode, the client initiates the session and issues commands from a
random port (N) targeting a command port on the server (usually TCP port 21). The client also starts
listening on the next consecutive port (N+1) for the server’s response. The FTP server initiates a
connection to the client from its data port (usually TCP port 20) targeting port N+1. The client uses this

new connection to perform an upload. The primary issue with active mode is that client-side firewalls
typically block inbound connections to random ports. In passive mode, the first part of the
communication between the client and the server is the same as in the active mode. However, in this case,
the server responds with a random port and the client initiates an outbound connection to that port. This
addresses the problem with client-side firewall restrictions on inbound connections.
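
The active-mode port relationships described above reduce to simple arithmetic:

```shell
# Port arithmetic for active-mode FTP, as described above (illustrative).
client_port_n=50210                      # random client command port (N)
client_data_port=$((client_port_n + 1))  # client listens on N+1
server_command_port=21                   # well-known FTP command port
server_data_port=20                      # server data port in active mode

echo "active mode: server port $server_data_port connects to client port $client_data_port"
```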

Limitations of FTP
The main advantage of FTP is its wide use and broad compatibility. However, because FTP is an older
technology, not designed for uploading web apps’ code, it does not offer advanced features that are
available with Web Deploy. For example:

• FTP only transfers files. It cannot modify files or distinguish their use as part of the transfer. Therefore,
it cannot automatically alter the database connection strings in web.config.

• FTP always transfers all files that you select, regardless of whether they have been modified.

Updating web apps


App development typically continues even after
you deploy an app to Azure. Developers add new
features and fix bugs to improve the app and
optimize the user experience. How you
implement these changes depends on the
location of the web app source code and the
deployment tool that you choose.

If you use FTP for deployment, you should upload new files, overwriting their older versions at the
destination. Since FTP cannot automatically detect file changes, you must either identify the files to
update yourself or upload all files that a web app includes. If you take the second approach, even a small
update requires a lengthy upload operation. If you use Web Deploy, MSDeploy.exe compares the files in
the source and destination, and then uploads only the modified files.

Continuous deployment and delivery


The continuous delivery model is a set of procedures and practices that optimize the process of
implementing development changes to code in a production environment. It does so while minimizing
risks associated with these changes. Continuous deployment is part of the continuous delivery model. It
involves regular and automatic builds and deployments of a project to a staging environment. If you
develop a web app by using a centralized source-control system, such as TFS or GitHub, you can
configure continuous deployment of that web app to Azure, on an automated schedule or in response to
any committed changes.

To enable and use continuous deployment, you must:

1. Connect the project to a web app. In the Azure portal, you must configure the location of your
source-code repository and provide credentials that the Azure web app can use to authenticate with
the repository.

2. Make one or more changes to the source code, and then commit them to the repository.

3. Trigger a build and deploy operation.

The precise steps involved in this configuration depend on the repository that you are using.

Additional Reading: For more information regarding continuous deployment to Azure
App Service, refer to: “Continuous Deployment to Azure App Service” at: https://aka.ms/worjdb
and “Continuous deployment with Web App for Containers” at: https://aka.ms/Qt1y7r

Staging and production slots


Before you deploy an updated code to a production Azure web app, you should ensure its integrity and
reliability. Therefore, it is important to implement a strict testing and quality assurance regime that
identifies any issues before the update takes effect in the production environment. You can perform much
of this testing in the development environment. For example, you can run unit tests on developers’
computers. However, the final testing location should be the staging environment. The staging
environment should match the production environment as closely as possible.

If you are using the Standard tier web apps, you can create up to five slots for each app. With the
Premium tier, this number increases to 20. You use the production slot to host the fully tested and verified
web app code. Additional slots provide the staging and testing environments. You can deploy new code
to one of the staging slots, and then use it to run acceptance tests. Each slot has a unique URL, different
from the production slot.
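
Slot URLs follow a predictable pattern. Assuming the common appname-slotname composition, a hypothetical app and slot give:

```shell
# Hypothetical names used for illustration.
app_name="adatum-sales-web"
slot_name="staging"

# The production slot keeps the app's own DNS name; each additional
# slot gets a name derived from it.
production_url="https://${app_name}.azurewebsites.net"
slot_url="https://${app_name}-${slot_name}.azurewebsites.net"

echo "$production_url"
echo "$slot_url"
```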

When the new app version in the staging slot passes all tests, you can deploy it to production by
swapping the slots. This not only simplifies the deployment process but also provides a convenient
rollback path. If the new version causes unexpected problems, you can swap the slots again to return the
web app to its original state.

Best Practice: If you are using continuous deployment, you should not configure it to
deploy the code to a production web app. This might lead to insufficiently tested code in a user-
facing environment. Instead, you can configure deployment to a staging slot or a separate web
app, where you can run tests before final deployment.

When you swap a production and staging slot, by default, the values of the following settings in the
staging slot replace the values of the same settings in the production slot:

• General settings such as framework version and web sockets

• App settings
• Connection strings

• Handler mappings

• Monitoring and diagnostic settings

• WebJobs content

You can designate individual app settings and connection strings for a specific slot. This ensures that these
settings do not change following a slot swap. You can enable this functionality directly from the Azure
portal by selecting the Slot setting check box that appears next to each app setting and connection
string entry on the Application settings blade.

The following production slot settings do not change when you swap a staging slot into a production slot:
• Publishing endpoints

• Custom domain names

• SSL certificates and bindings



• Scale settings

• WebJob schedulers
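
The swap behavior in the two lists above can be modeled with a few variables: the app setting moves between slots, while the slot-specific (sticky) value stays put. Names and values here are purely illustrative:

```shell
# Settings before the swap (hypothetical values).
prod_app_setting="v1"          # swaps with the staging slot
prod_endpoint="prod-endpoint"  # sticky: stays with the production slot
staging_app_setting="v2"
staging_endpoint="staging-endpoint"

# Perform the swap: only the non-sticky setting is exchanged.
tmp=$prod_app_setting
prod_app_setting=$staging_app_setting
staging_app_setting=$tmp

echo "production app setting: $prod_app_setting"   # now the staged value
echo "production endpoint:    $prod_endpoint"      # unchanged
```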

Although staging slots are available publicly, their URLs are different from the production web app, so
internet users are unlikely to connect to them. However, in some scenarios, you might want to restrict
access to your staging slot so that only your developers and the testing team can access it. You can do
this by adding an IP address whitelist to the web.config file of the web app.
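Such a restriction corresponds to the IIS ipSecurity section of web.config. The following fragment is a sketch; the subnet shown is an example value that you would replace with your own address ranges:

```xml
<configuration>
  <system.webServer>
    <security>
      <!-- Deny all clients except the listed subnet (example address range) -->
      <ipSecurity allowUnlisted="false">
        <add ipAddress="203.0.113.0" subnetMask="255.255.255.0" allowed="true" />
      </ipSecurity>
    </security>
  </system.webServer>
</configuration>
```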

Note: You can perform a swap with preview. This applies slot-specific configuration from
the destination slot to the source slot but does not perform the swap right away. Instead, you
must complete the swap explicitly. This allows you to ensure that the swap takes place after the
source slot is fully operational. This approach eliminates the impact on web app responsiveness
during a short period in which compute resources are allocated to the source slot. This delay is
referred to as the warm-up period.

Updating Web App for Containers


To update Web App for Containers, you must first update your custom Docker container image and then
push it to either a public or private repository from which you deployed your app. After completing this,
you simply restart the web app.
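Assuming an app named MyContainerApp that was deployed from an image in a repository you control, this update sequence can be sketched with the Azure CLI:

```shell
# Point the app at the updated image tag (image name and tag are examples)
az webapp config container set \
    --resource-group MyResourceGroup \
    --name MyContainerApp \
    --docker-custom-image-name myregistry/myapp:v2

# Restart the web app so that it pulls the updated image
az webapp restart --resource-group MyResourceGroup --name MyContainerApp
```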

Demonstration: Deploying web apps


In this demonstration, you will see how to:

• Create a new .NET Core web app by using Visual Studio.

• Create a new Azure web app by using Azure CLI.


• Publish the web app from Visual Studio.

Demonstration Steps

Question: What are the benefits of deployment slots and how can you move your web app
between different slots?

Lesson 4
Configuring web apps
After you create and deploy a web app, you can customize the way it operates by modifying its
configuration. For example, you can configure SSL certificates to support encryption, specify databases
and storage accounts to provide persistent storage and scale the web app to address changing demand.
In this lesson, you will learn how to configure a web app for optimal performance and cost efficiency. You
will also find out how to use WebJobs to implement scripts that process web app background tasks.

Lesson Objectives
After completing this lesson, you will be able to:

• Configure a web app’s application and authentication settings.

• Configure virtual networks and hybrid connectivity for web apps.

• Scale web apps.


• Create WebJobs to run background tasks.

Configuring a web app’s application and authentication settings


After you create your web app, you can
configure the following settings on the
application settings blade of the web app in the
Azure portal:

• Framework versions. Use this setting to select from the supported development framework versions. Server-side code that executes to render webpages requires a framework, which developers select when developing a web app. Azure web apps running on Windows instances support the ASP.NET, PHP, Java, and Python frameworks. With Azure Web App on Linux, you can also build apps by using .NET Core and Ruby.

• Platform. Use this setting to control whether to run the server code in 32-bit or 64-bit mode. The 64-
bit mode is available only for Basic, Standard, Premium, and Premium V2 tier web apps.

• Web Sockets. Use this setting to enable web sockets, which allow for two-way communication
between a server and a client. Developers can build chat rooms, games, and support tools that
benefit from web sockets.

• Always On. Use this setting to retain the app’s code in memory even if the web app is idle. This
eliminates the need to reload the code in response to new requests, following a period of inactivity.
This improves web app responsiveness, resulting in a better user experience. The Always On
feature is available only for web apps in the Standard and Premium tiers.

• Managed Pipeline Version. Use this setting to assign either the integrated or classic mode to the
web app. An application pool that is running in the integrated mode benefits from the integrated
request-processing architecture of IIS and ASP.NET, so this is the default mode for new web apps.
Legacy apps that run in the classic mode, which is equivalent to the IIS 6.0 worker-process isolation
mode, use separate processes for IIS and ASP.NET, with duplicate processes for authentication and
authorization.

• ARR Affinity. Use this setting to improve load balancing of stateless web apps. Turning it off disables
the Application Request Routing (ARR)–based affinity cookie mechanism. When dealing with stateful
web apps, you should turn on this setting.

• Auto Swap. Use this setting to enable automatic swap between the production and staging
environments each time you upload new updates to the staging slot.

• Debugging. Use this setting to enable remote debugging and select the version of Visual Studio that
you intend to use during debugging sessions.

• App Settings. Use this setting to pass custom name/value pairs to your application at runtime. Work
with your development team to determine what settings the web app’s code requires. For example,
you can use an app setting to specify an administrator’s email address. The web app’s code could use
this setting to dynamically generate the site’s content.

• Connection Strings. Use this setting to enable the web app to connect to a data service, such as a
database, a caching server, an event hub, or a notification hub. Most web apps use an external data
service to store or consume data. You can use this setting to override static connection strings defined
in configuration files such as web.config.

• Default Documents. Use this setting to specify the pages that display by default when users connect
to your web app by using its DNS name. Work with your developers to ensure that the web app’s
home page appears in the default documents list. Optimize the web app by ensuring that the home
page is at the top of the list.
• Handler mappings. Use this setting to designate custom script processors that handle processing of
files with specific extensions, such as .php or .asp. To add a custom script processor, provide its path
and any additional command-line switches.

• Virtual applications and directories. Use this setting to add additional virtual applications and
directories to your web app by specifying their physical paths.

Diagnostics logs
You can access the diagnostics settings for a web app by clicking Diagnostics logs on the web app blade.
On the resulting blade, you can configure application logging. You have the option of storing logs directly
in the file system on the VM hosting the web app or in a storage account that you designate. You can also
configure the collection of web server logs, detailed error messages, and traces of failed requests.

Custom domain names


If you have registered a custom DNS domain name, such as adatumcorp.com, with a domain registrar,
you can assign that name to your Azure web app. Each Azure web app has a default name in the
azurewebsites.net namespace. The use of custom domain names is available starting with the Shared
pricing tier.
To assign a custom domain name to your Azure web app, in your DNS registrar, create a canonical name
(CNAME) resource record mapping to the web app’s default name. Alternatively, you can create an A
resource record that maps the custom domain name to the public IP address of the web app. If you are
migrating an existing web app to Azure, either option will result in temporary downtime corresponding to
the time it takes to verify the ownership of the custom DNS domain. To avoid this downtime, you can
verify your domain ownership ahead of time by creating a domain verification record in the format
awverify.yourdomain, which maps to awverify.yourwebapp.azurewebsites.net.
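After the DNS record is in place, you register the custom host name with the web app. The following Azure CLI sketch uses the adatumcorp.com example domain and placeholder resource names:

```shell
# Add a verified custom host name to the web app
az webapp config hostname add \
    --resource-group MyResourceGroup \
    --webapp-name MyWebApp \
    --hostname www.adatumcorp.com
```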

Additional Reading: For details regarding migrating active DNS names to Azure App
Service, refer to: “Migrate an active DNS name to Azure App Service” at: https://aka.ms/gzgvjd

Certificates
If you want to use SSL to encrypt communications between the web browser and an Azure web app, you
must obtain and upload a certificate from a publicly recognized certificate authority. Use the web app’s
SSL certificates blade in the Azure portal to perform an upload. To use SSL with a custom domain, you
must ensure that the custom domain name matches either the Subject Name of the certificate or one of
the entries of its Subject Alternative Name property. After you upload the certificate, you can bind it to
the custom domain by using the SSL bindings section of the web app’s SSL certificates blade.

The following is the process for enabling HTTPS for a custom domain:

1. Create your SSL certificate that includes your custom domain name as the value of the Subject Name
or Subject Alternative Name property of the certificate. You also can use a wildcard certificate for this
purpose.

2. Assign either the Standard, Premium, or Premium V2 pricing tier to the service plan of the web app,
because only these tiers allow the usage of HTTPS with a custom domain.

3. Configure SSL for the web app by uploading the certificate and adding a corresponding SSL binding.

4. Enforce HTTPS for the web app (optionally) by configuring the URL Rewrite module, which is part of
App Service. URL Rewrite redirects incoming HTTP requests via an HTTPS connection. You also have
the option of enforcing HTTPS by enabling the HTTPS Only setting on the Custom domains blade
of the web app.
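The certificate upload and binding in step 3 can also be scripted. The following Azure CLI sketch assumes a PFX file for the adatumcorp.com certificate and placeholder resource names:

```shell
# Upload the PFX certificate; the command output includes its thumbprint
az webapp config ssl upload \
    --resource-group MyResourceGroup \
    --name MyWebApp \
    --certificate-file ./adatumcorp.pfx \
    --certificate-password '<pfx-password>'

# Bind the uploaded certificate to the custom domain by using SNI
az webapp config ssl bind \
    --resource-group MyResourceGroup \
    --name MyWebApp \
    --certificate-thumbprint <thumbprint-from-upload> \
    --ssl-type SNI
```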

Note: For more information on how to enable HTTPS for an app in App Service, refer to:
“Bind an existing custom SSL certificate to Azure Web Apps” at: http://aka.ms/X0xh9y

Configuring authentication and authorization in App Service


You can integrate web apps that require authentication and authorization with Azure AD or with on-
premises Active Directory Domain Services (AD DS) by using Active Directory Federation Services (AD FS).
Azure AD authentication supports OAuth 2.0, OpenID Connect, and SAML 2.0 protocols. If you configure
your Azure AD to synchronize directories with your on-premises AD DS, you can achieve a single sign-on
(SSO) experience for AD DS users when they access your web app in Azure. Furthermore, for
authentication, you can configure other cloud authentication providers, such as Microsoft accounts,
Facebook, Twitter, or Google.

Advanced configuration of web apps by using ApplicationHost.config


You can use XML Document Transformation (Xdt) declaration in the ApplicationHost.config file to
control additional configuration for your web app. For example, you can configure custom environment
variables, add additional applications, define the runtime environment, and configure Azure site
extensions.

Additional Reading: For more information on how to use Xdt transform samples, refer to:
“Xdt transform samples” at: http://aka.ms/Rkzucb

Note: At the time of authoring of this content, App Service on Linux supports a relatively
small subset of application and configuration settings available to Windows-based web apps.
App Service on Linux does not support integration with Azure AD and third-party identity
providers or IIS-specific options, such as managed pipeline, Web Sockets, or handler mappings.

Configuring virtual network connectivity and hybrid connectivity


Web apps and mobile apps might require a
connection to services that you implemented by
using Azure VMs. In such cases, you can connect
App Service to the virtual network to which the
Azure VMs are connected. With virtual network
connectivity in place, apps can communicate with
Azure VMs that contain databases and web
services by using private IP addresses, eliminating
the need to expose Azure VMs to the internet.

The first lesson of this module presented the App Service Environment feature, which allows you to deploy App Service apps directly into a virtual network. This is a high-end solution that requires the Isolated pricing tier. With Standard, Premium, and
Premium V2 pricing tiers, you have the option of connecting App Service apps to a virtual network via
Point-to-Site (P2S) VPN. You can use this option if your bandwidth and latency requirements fall within
the performance range of a P2S VPN gateway. You must deploy a P2S VPN gateway into the virtual
network to support this solution.

To enable virtual network integration for your app, perform the following steps:
1. Sign in to the Azure portal, and then select the web app for which you want to configure virtual
network integration.

2. On the web app blade, click the Networking link.


3. In the VNET Integration section, click the Setup link.

4. On the Virtual Network blade, select an existing virtual network or create a new virtual network.
Note that the virtual network must have a virtual gateway to support P2S VPN. If you choose to
create a new virtual network, the platform will automatically provision a new gateway.

If you plan to connect App Service apps to on-premises resources, you can use hybrid connections. This is
possible without opening any inbound ports on the perimeter of your on-premises network, if the target
resource listens on a specific IP address and TCP port combination. One common scenario that leverages
this capability is connectivity to on-premises SQL Server instances.

From the architectural standpoint, a hybrid connection relies on the Azure Service Bus Relay residing in
Azure and a Hybrid Connection Manager (HCM) that you must install in your on-premises environment.
HCM requires direct connectivity to the resource you want to make accessible from App Service apps.
HCM also must be able to reach Azure via TCP ports 80 and 443.

To create a hybrid connection with your apps, perform the following steps:

1. Sign in to the Azure portal, and then select the web app for which you want to configure hybrid
integration.
2. On the web app blade, click the Networking link.

3. In the Hybrid Connections section, click the Configure your hybrid connection endpoints link.

4. On the Hybrid connections blade, click Add hybrid connection.

5. On the Add hybrid connection blade, click Create new hybrid connection.

6. On the Create hybrid connection blade, in the Hybrid connection Name text box, type a name
that will uniquely identify this connection.

7. In the Endpoint Host text box, type the fully qualified domain name (FQDN) of the on-premises
resource.

8. In the Endpoint Port text box, enter the static port for the on-premises resource to which you want
to connect.

9. In the Service Bus namespace, select either the Create new or Select existing option.

10. In either case, you will need to provide the name and the Azure region of the Service Bus
namespace.

11. Click OK to confirm the creation of the hybrid connection.

12. After the hybrid connection is created, click it to configure connectivity.

13. On the Hybrid connection blade, click Download connection manager.

14. Follow the setup to install Hybrid Connection Manager on the on-premises Windows computer with
direct connectivity to the resource that you want to make available to your App Service apps.

Note: At the time of authoring of this content, App Service on Linux does not support
virtual network integration.

Configuring availability and scalability


The scaling options for Azure web apps depend
on the pricing tier of their service plan. For the
Basic tier, you can increase only the size of an
individual instance or the number of instances.
For the Standard, Premium, and Premium V2
tiers, you can also configure automatic scaling.
This involves specifying a metric that will trigger
an increase or decrease in the number of
instances when it reaches a threshold that you
define. You can also scale Standard, Premium or
Premium V2 service plan web apps based on a
schedule, which can be helpful if you know when
to expect fluctuations in demand. Free and Shared pricing tiers do not offer support for horizontal scaling.

Additional Reading: For more information on scaling web apps, refer to: “Scale up an app
in Azure” at: http://aka.ms/Vaut94

To configure scaling for a web app, perform the following steps:

1. In the Azure portal, click the web app that you want to configure.

2. On the web app blade, click Scale Up (App Service plan).

3. In the Choose your pricing tier box, select Basic to configure simple static scaling. If you want to use
automatic scaling, select Standard, Premium, or Premium V2.
4. On the web app blade, click the Scale Out (App Service plan) link.

5. On the Scale out blade, you can scale out by selecting a larger Instance Count in the Override
condition section.

6. For Standard, Premium, and Premium V2 tier web apps, you can configure automatic scaling. To start,
click Enable autoscale, and then configure one or more scale conditions. There are two types of scale
conditions:

o Scale based on a metric. This involves specifying the following parameters:


 One or more rules. Each rule relates to a specific metric, such as CPU Percentage, Memory
Percentage, Disk Queue Length, Http Queue Length, Data In, and Data Out. You provide
additional criteria, such as time aggregation, threshold, and duration, that determine when
the rule takes effect.
 Instance limits. The limits dictate the minimum, maximum, and default number of instances.
 Schedule. This determines when evaluation of the rule should occur.
o Scale to a specific instance count. This involves specifying the following parameters:
 Instance count. This represents the number of instances that should be active when the scale
condition is in effect.
 Schedule. This determines when the scale condition should apply.
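Recent versions of the Azure CLI can also define these scale conditions. The following sketch assumes an App Service plan named MyServicePlan in the resource group MyResourceGroup:

```shell
# Create an autoscale setting with instance limits for the service plan
az monitor autoscale create \
    --resource-group MyResourceGroup \
    --resource MyServicePlan \
    --resource-type Microsoft.Web/serverfarms \
    --name MyAutoscaleSetting \
    --min-count 2 --max-count 5 --count 2

# Add one instance when average CPU exceeds 70 percent over 10 minutes
az monitor autoscale rule create \
    --resource-group MyResourceGroup \
    --autoscale-name MyAutoscaleSetting \
    --condition "CpuPercentage > 70 avg 10m" \
    --scale out 1
```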

Best Practice: When using schedule for scaling instances, be aware that it can take several
minutes for each instance to start and become available to users. Therefore, ensure that you
allocate enough time between the schedule’s start and the point when you expect a change in
the utilization of web apps that are part of the same service plan.

Implementing WebJobs
The WebJobs feature of App Service enables you
to run automated background tasks in two
different ways:
• Continuously. Tasks continuously re-execute
their main method. For example, a task
might continuously check for the presence of
new files to process.

• Triggered. Tasks execute in two ways:

o Scheduled. Tasks run at times that you specify.

o Manual. Tasks run whenever you decide to execute them.
You can use WebJobs for maintenance tasks that do not involve web app content delivery to users and
that you can schedule outside of web app peak usage times. For example, these tasks might include
image processing, file maintenance, or aggregation of Really Simple Syndication (RSS) feeds.

Best Practice: By default, web apps unload and stop after prolonged periods of inactivity.
This also interrupts any WebJobs in progress. To avoid these interruptions, enable the Always On
feature.

You specify the operations that a WebJob performs by creating a script file. This file can be a:

• Windows batch file

• Windows PowerShell script

• Bash shell script

• PHP script

• Python script

• Node.js script

The type of script that you create depends on your own preferences. For example, if you are a Windows
administrator with little web development experience, you might want to code WebJob operations as an
Azure PowerShell script, rather than as a Node.js script.

Creating a WebJob
To create a WebJob, first compress your script file and any supporting files that it requires into a .zip file,
and then perform the following steps:

1. In the Azure portal, navigate to the blade of the web app that you want to configure with a WebJob.

2. On the web app blade, click the WebJobs link.

3. On the WebJobs blade, click Add.

4. On the Add WebJob blade, in the Name text box, type a name that will identify the new WebJob.
5. Click the folder icon next to the File Upload text box.

6. In the Open dialog box, browse to the script file that you created, and then click Open.

7. In the Type drop-down list, select Continuous or Triggered. If you select Triggered, you can specify
the type of trigger as either Scheduled or Manual. For scheduled triggers, you must provide a Cron
expression that defines your schedule.

8. If you selected the Scheduled type, then in the Scale drop-down list, select Multi Instance or Single
Instance. The multi-instance option will scale your WebJob across all instances of the web app. The
single-instance option will result in a single WebJob.

9. To finish creation of the WebJob, click OK.
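For a scheduled, triggered WebJob, the CRON expression can also be supplied in a settings.job file that you include in the .zip file alongside the script. The expression uses six fields (second, minute, hour, day, month, day of the week); the value below is an example that runs the job daily at 2:00 AM:

```json
{
  "schedule": "0 0 2 * * *"
}
```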

Viewing the WebJob history


The WebJob history provides information about when the WebJob was run and the result of the script
execution. To access the history, perform the following steps:
1. In the Azure portal, click the web app that runs the WebJob, and then click WebJobs.

2. Select the relevant WebJob, and then click Logs. This will open a web browser window displaying the
WebJob page. This page contains the name of the WebJob, the execution status, its duration, and
the last time the job was run.

3. To see further details, click the name of the WebJob, click the entry in the TIMING column, and then
click Toggle output. This displays individual events throughout the execution of the WebJob.

4. To download a text file containing the output, click the download link.

Demonstration: Configuring web app settings and autoscaling and


creating a WebJob
In this demonstration, you will see how to:

• Configure web app settings.

• Configure autoscaling.

Question: In what ways can you configure WebJobs to run?



Lesson 5
Monitoring web apps and WebJobs
Running web apps consume resources, incur costs, and can generate errors. For example, a web app
might display an error in response to users’ requests for webpages that do not exist. Azure provides
insight into your web app’s behavior by making available a range of diagnostic logs, troubleshooting
tools, and monitoring tools. In this lesson, you will see how to configure logging for your web app, and
how to use the most popular troubleshooting and monitoring tools.

Lesson Objectives
After completing this lesson, you will be able to:

• Configure site diagnostics and application diagnostics to track a web app’s behavior.

• Identify the different ways to monitor web apps.

• Use the Kudu user interface to access further information about your web app.

Configuring application and site diagnostics


To troubleshoot a web app’s errors or identify
ways to improve its performance, you must
gather information about its behavior. One way
to gain better understanding of the way a web
app operates is to collect application diagnostics
and site diagnostics data.

Best Practice: Enable site diagnostics and application diagnostics to record detailed information only when you are investigating a web app’s behavior. When you complete your investigation and want to tune your web app for optimal performance, minimize the amount of information the diagnostic tools log, because logging has a small but measurable performance impact.

Application logging
Application logging makes it possible to capture individual events that occur as the web app code
executes. To record such an event, developers include references to the System.Diagnostics.Trace class
in the web app code. Developers frequently use this approach to generate trace messages, helpful in error
handling or verifying a successful operation.
Application logging is turned off by default, which means that trace messages are not recorded. If you
switch on application logging, you must configure the following settings by clicking the Diagnostics logs
link on the web app blade:

• Log storage location. Choose whether to store the application diagnostic log in the file system of
the web app instance or a blob container in an Azure Storage account. You can choose to enable
either one or both locations.

• Logging level. Choose whether to record informational, warning, error, or verbose messages in the
log. The verbose logging level records all messages that the application sends. You can configure a
different logging level for each log storage location.

• Retention period. When using an Azure Storage account, you can specify the number of days after
which logs should be automatically deleted. By default, the storage account retains them indefinitely.

Site diagnostics
You can use site diagnostics to record information about HTTP requests and responses, which represent
the communications between the web server hosting the web app and clients accessing the web app. The
following are the site diagnostic settings that you can enable or disable:

• Web server logging. This option controls the standard World Wide Web Consortium (W3C)
extended log for your web app’s server. This type of log shows all requests and responses, client IP
addresses, and timestamps for each event. You can use it to assess server load, identify malicious
attacks, and study client behavior.
• Detailed error messages. In HTTP, any response with a status code of 400 or greater indicates an
error. This log gathers detailed messages representing these errors, which should help you to
diagnose an underlying problem.
• Failed request tracing. This option enables you to trace detailed data when an error occurs. Because
the trace includes a list of all the IIS components that processed the request along with the
corresponding timestamps, you can use this trace to isolate problematic components.

Additional Reading: For more information on diagnostic logging, refer to: “Enable
diagnostics logging for web apps in Azure App Service” at: http://aka.ms/A42xut

Note: To troubleshoot issues with App Service on Linux, you should check Docker logs,
which reside in the LogFiles directory on the VM hosting the web app.

Monitoring web apps


After you enable application and site-diagnostic
logs, you can download the logs to examine their
content. Additionally, you can use the
Monitoring tile in the Azure portal to view a
web app’s performance.

Accessing diagnostic logs


When storing logs in the file system of a web
app’s instances, you can retrieve them by using
FTP. You can find the FTP link in the Essentials
section of each web app’s blade in the Azure
portal. You can use this link in your web browser
or in a dedicated FTP client, such as Core_FTP. To
access the logs, you must authenticate with deployment credentials that you configured for the FTP server
and Git.

The logs are in the following folders:

• Application logs: /LogFiles/Application

• Detailed error logs: /LogFiles/DetailedErrors


• Failed request traces: /LogFiles/W3SVC#########/

• Web Server logs: /LogFiles/http/RawLogs

• Deployment logs: /LogFiles/Git

To examine the failed request traces, ensure that you download both XML and XSL files to the same
location. You can then open the XML files in Microsoft Edge.
Instead of using FTP, you also can download the logs by using the Save-AzureWebsiteLog Windows
PowerShell cmdlet, as follows:

Save-AzureWebsiteLog -Name MyWebapp -Output .\LogFiles.zip

Alternatively, you can use the Azure CLI to download logs:

az webapp log download --name MyWebapp --log-file .\LogFiles.zip --resource-group MyResourceGroup

If you need to filter or search content of the logs, you should consider using Visual Studio and leverage its
integration with Application Insights. To take advantage of this functionality, install the Application Insights
SDK and add it to your project in Visual Studio. Then add a trace listener to your project by selecting
Manage NuGet Packages and then Microsoft.ApplicationInsights.TraceListener. Finally, upload the
project to Azure, and then monitor the log data, together with requests, usage, and other statistical
information.

To view log data in near real-time, developers can stream the logs to their client computers by running
the following Azure PowerShell cmdlet:

Get-AzureWebSiteLog -Name webappname -Tail

Alternatively, they can use for this purpose the az webapp log tail Azure CLI command.

Monitoring web apps in the Azure portal


The Azure portal also includes a monitoring pane within the web app blade. The pane consists of
customizable graphs displaying performance counters of web app resources, such as CPU Time and
network traffic. Some of the most interesting counters include:

• CPU Time

• Data In

• Data Out

• HTTP Server Errors

• Requests

• Memory working set

Other metrics that you can add to the graph include:

• Average memory working set


• Average Response Time

• Various HTTP error types

• HTTP successful responses


By displaying these metrics in a graph format, you can quickly determine how demand and the web app
responses have varied over an hour, 24 hours, or seven days.

You can also configure alerts that are raised when a counter you select reaches a custom threshold that
you specify. You can configure an alert to trigger email notifications to owners, contributors, readers of
the web app, and email addresses that you provide. You also can specify a webhook, which represents an
HTTP or HTTPS endpoint where the alert should be routed. In addition, it is possible to remediate the
issue that is causing an alert. To accomplish this, as part of alert definition, specify a logic app that should
run automatically when an alert is raised and configure the logic app to perform the remediating action.

To add an alert, perform the following steps:

1. In the Azure portal, navigate to the web app that you want to monitor.

2. In the monitoring pane, click any of its graphs.

3. On the Metrics blade, click Add metric alert.

4. On the Add rule blade, in the Name text box, type a unique name.

5. In the Description text box, type a description of the alert.

6. Ensure that Metrics appears in the Alert on drop-down list. Note that you can also generate alerts
based on events.
7. Leave the default entries in the Subscription, Resource group, and Resource drop-down lists.

8. In the Metric drop-down list, select the metric to which you would like to add an alert.

9. In the Condition drop-down list, select a condition, such as Greater than.

10. In the Threshold text box, type the value that should trigger the alert.

11. In the Period drop-down list, select the period during which the value should exceed the threshold.

12. Select Email owners, contributors, and readers.

13. Optionally, specify the email addresses of additional notification recipients.

14. Optionally, in the Webhook text box, type the HTTP/HTTPS endpoint to which you want to route the
alert.
15. If you intend to trigger execution of a logic app in response to the alert, click Run a logic app from
this alert. On the Select a logic app blade, click Enabled, in the Logic app drop-down list, select the
logic app you want to run, and then click OK.

16. Click OK to finish the creation of the alert.
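If you prefer scripting, newer versions of the Azure CLI can create a comparable metric alert. The following sketch uses placeholder names and an example condition based on the HTTP Server Errors metric:

```shell
# Fire an alert when the web app returns more than ten HTTP 5xx
# responses within a five-minute window (names and IDs are examples)
az monitor metrics alert create \
    --resource-group MyResourceGroup \
    --name Http5xxAlert \
    --scopes "/subscriptions/<subscription-id>/resourceGroups/MyResourceGroup/providers/Microsoft.Web/sites/MyWebApp" \
    --condition "total Http5xx > 10" \
    --window-size 5m \
    --evaluation-frequency 1m \
    --description "Too many server errors"
```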

Using Kudu
Project Kudu is an open-source component of
Web Apps that provides several functional
enhancements, such as support for continuous
deployment from Git and Mercurial source-code
control systems. It also includes the code that
implements WebJobs. Kudu offers a user
interface that facilitates access to diagnostics and
troubleshooting tools.

Accessing the Kudu user interface


Every web app includes a hidden Kudu site. To access this, add the scm subdomain to the
azurewebsites.net FQDN for your web app. For example, if your web app is accessible via
http://mywebapp.azurewebsites.net, you can access the corresponding Kudu user interface at
https://mywebapp.scm.azurewebsites.net. Alternatively, you can navigate to the same location from the
Advanced Tools section of the web app blade in the Azure portal. Regardless of the method you choose,
you will need to use an account that has administrative privileges to the web app.

The main page of the Kudu interface displays information about the sandbox environment hosting the
web app, including its uptime, site folder, temp folder, and Azure App Service version. By using the
options in the Debug console menu, you can interact with this environment by running Windows
commands or PowerShell cmdlets. In both cases, the interface includes a browser view of the file system
folders available to the web app.

By selecting the Process explorer menu option, you can view the list of all web app processes, including
information such as their memory usage and uptime. For each process, you can find out its dynamic link
library files (.dll files), threads, and environment variables.
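The same process information is available programmatically through the Kudu REST API. The sketch below queries it with Invoke-RestMethod; the site name and deployment credentials are placeholders:

```powershell
# Placeholder deployment credentials for the hypothetical web app.
$user = 'adatumwebapp\deployuser'
$securePass = ConvertTo-SecureString 'deploymentPassword' -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential ($user, $securePass)

# List the processes running in the web app sandbox, mirroring what
# the Process explorer page in the Kudu interface displays.
Invoke-RestMethod -Credential $cred `
    -Uri 'https://adatumwebapp.scm.azurewebsites.net/api/processes'
```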

Other Kudu interface elements provide access to diagnostics dumps, the log stream, the WebJobs dashboard,
webhooks, and deployment scripts. There is also the option of adding NuGet extensions to the web app.

Demonstration: Using Kudu to monitor a WebJob


In this demonstration, you will see how to use Kudu to monitor the status of a WebJob.

Question: How can you access the Kudu interface for a web app that you created in Azure?

Lesson 6
Implementing Traffic Manager
If you deliver web app services to customers spread across multiple locations, you typically need to be
able to run your apps in a load-balanced manner across many datacenters. This allows you to minimize the
time it takes for customers to receive responses to their requests by serving these responses from the web
app instance that is closest to the origin of these requests. Geographically distributed load balancing also
increases the availability of a web app by facilitating region-level resiliency. You can implement this load
balancing by using Azure Traffic Manager. In this lesson, you will learn how to configure and use Traffic
Manager to improve responsiveness and availability of Azure App Service apps.

Lesson Objectives
After completing this lesson, you will be able to:

• Describe how Traffic Manager distributes requests to multiple App Service apps.
• Explain how to configure Traffic Manager endpoints.

• Describe the best practices for a Traffic Manager configuration.

• Configure Traffic Manager.

Overview of Traffic Manager


When you create an app, you must choose an
Azure region where the app will be hosted. If you
choose a Basic, Standard, Premium, or Premium
V2 tier service plan, you can create multiple
instances of your app to increase capacity and
resilience to failure. These instances will be in the
same Azure region. The Azure load balancer will
automatically distribute the requests targeting
the web app they host.

However, you might want to distribute the load across web apps that are in different Azure
regions. You can implement this functionality by
using Traffic Manager. Traffic Manager provides load distribution by relying exclusively on DNS name
resolution. Traffic Manager supports any endpoints with DNS names resolvable to public IP addresses,
regardless of their location. Traffic Manager periodically checks all endpoints. If an endpoint fails the
checks, Traffic Manager removes it from the distribution until checks are successful again.

How Traffic Manager works


Through Traffic Manager, a client DNS resolver resolves an FQDN of the target web app to an IP address in
the following way:

1. A user attempts to connect to a specific service by using its FQDN, by typing it into a browser address
bar or by clicking a link, for example. In this example, the user attempts a connection to
www.adatum.com. From the DNS standpoint, this name takes the form of a CNAME record, which
resolves to an A record in the Traffic Manager DNS namespace trafficmanager.net.

2. The DNS server handling the name resolution for the client DNS resolver of the user’s computer
submits a query to the Traffic Manager DNS servers.

3. Traffic Manager accepts the query and attempts to find the optimal endpoint, based on its
configuration. It returns the DNS name of one endpoint to the DNS server, which, in turn, forwards it
to the DNS resolver on the user’s computer.

4. The DNS resolver on the user’s computer submits a request to resolve the endpoint’s DNS name to its
IP address.

5. Following successful DNS name resolution, the user connects to the endpoint via its IP address.
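You can observe this resolution chain from a Windows client with Resolve-DnsName; the names below match the hypothetical example above:

```powershell
# Steps 1-2: the vanity name is a CNAME pointing into trafficmanager.net.
Resolve-DnsName -Name www.adatum.com -Type CNAME

# Step 3: Traffic Manager answers with the DNS name of the endpoint it
# selected, which the client then resolves to an IP address (steps 4-5).
Resolve-DnsName -Name adatum.trafficmanager.net
```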

Note: You can use Traffic Manager to distribute loads across Azure web apps, Azure mobile
apps, Azure Cloud Services, Azure VMs with public IP addresses, and external endpoints. You can
use it to increase responsiveness and availability for endpoints within and outside of Azure.

How to implement Traffic Manager


Follow these steps to implement Traffic Manager:
1. Deploy endpoints that represent the same content and apps across different Azure regions and,
optionally, to locations outside of Azure.

2. Choose a unique domain prefix for your Traffic Manager profile.

3. Create a Traffic Manager profile with a routing method that is most appropriate for your needs.

4. Add endpoints to the Traffic Manager profile.

5. Configure monitoring for the endpoints which periodically checks whether they are operational.

6. Optionally, create a custom DNS record to point to your Traffic Manager profile.
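Step 6 usually amounts to a CNAME record at your DNS provider. An illustrative zone-file fragment, with placeholder names, might look like this:

```
; Map the vanity name to the Traffic Manager profile FQDN.
www.adatum.com.    3600    IN    CNAME    adatum.trafficmanager.net.
```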

Traffic Manager supports the following routing methods:

• Performance. Traffic Manager evaluates which application instance is closest to the end user (in terms
of network latency) and provides the corresponding DNS name.
• Priority (formerly Failover). Traffic Manager provides the DNS name corresponding to the application instance
designated as the primary, unless that instance does not pass Traffic Manager health checks. In that
case, Traffic Manager returns the DNS name of the next application instance (according to the
prioritized list of instances that you define) to end users.

• Weighted. Traffic Manager provides the DNS names of every application instance (alternating among
them). The distribution pattern depends on the value of the weight parameter that you define. The
volume of traffic requests that Traffic Manager directs to a particular instance is directly proportional
to its weight. You can specify weights between 1 and 1,000. All endpoints have a default weight of 1.

• Geographic. Traffic Manager directs traffic to a specific location based on the geographical area from
which an access request originates. This enables you to provide localized user experience or to restrict
access to specific application instances to comply with data sovereignty rules.
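As a sketch of the Weighted method, the following assumes a profile object ($profile) and two web app objects already exist, and directs roughly three quarters of requests to the first endpoint; all names are placeholders:

```powershell
# Weight values are relative; webapp1 receives about 75% of the traffic.
Add-AzureRmTrafficManagerEndpointConfig -EndpointName webapp1ep `
    -TrafficManagerProfile $profile -Type AzureEndpoints `
    -TargetResourceId $webapp1.Id -EndpointStatus Enabled -Weight 3

Add-AzureRmTrafficManagerEndpointConfig -EndpointName webapp2ep `
    -TrafficManagerProfile $profile -Type AzureEndpoints `
    -TargetResourceId $webapp2.Id -EndpointStatus Enabled -Weight 1

# Persist the endpoint changes to the profile.
Set-AzureRmTrafficManagerProfile -TrafficManagerProfile $profile
```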

You can configure three types of Traffic Manager endpoints:

• Azure endpoints that represent services hosted in Azure, such as App Service apps, cloud services, or
public IP addresses. Traffic Manager also supports routing to nonproduction slots of App Service
apps.

• External endpoints that identify the services hosted outside of Azure, such as your web app running at
an ISP. This provides a convenient way to maintain continuity of your services in migration scenarios.

• Nested profiles that you use to implement nested hierarchies of Traffic Manager profiles. You can use
this technique to increase the flexibility of load balancing. For example, you could set up a parent
profile that uses performance load balancing to distribute the load over several endpoints around the
world. Traffic Manager sends client requests to the endpoint that is closest to the user. Within one of
those endpoints, you could use round-robin load balancing in a child profile to distribute the load
equally between two web apps.
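A nested hierarchy can be sketched in PowerShell as follows, assuming $parent and $child reference existing profile objects (placeholder names):

```powershell
# Add the child profile as a nested endpoint of the parent profile.
# -EndpointLocation is required for nested endpoints when the parent
# uses the Performance routing method; -MinChildEndpoints sets how many
# child endpoints must be healthy for the nested endpoint to be usable.
Add-AzureRmTrafficManagerEndpointConfig -EndpointName europeChild `
    -TrafficManagerProfile $parent -Type NestedEndpoints `
    -TargetResourceId $child.Id -EndpointStatus Enabled `
    -EndpointLocation "West Europe" -MinChildEndpoints 1

Set-AzureRmTrafficManagerProfile -TrafficManagerProfile $parent
```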

Configuring Traffic Manager


Before you can use Traffic Manager to load-balance traffic to two or more App Service apps, you must
create those apps in different Azure regions and deploy matching content to each. In most cases, content
and configuration should be identical on every app you use in a Traffic Manager profile. After you
complete the deployment, perform the following tasks to configure Traffic Manager:
1. Sign in to the Azure portal.

2. On the hub menu, click + Create a resource, click Networking, click See all,
click Traffic Manager profile, and then click Create.

3. On the Create Traffic Manager profile blade, in the Name text box, type the unique name in the
trafficmanager.net DNS namespace that will identify the profile.
4. In the Routing method drop-down list, select one of the following entries:

• Performance

• Weighted
• Priority

• Geographic

5. Create a new resource group or use an existing resource group for the Traffic Manager profile.

6. Specify the Azure region where the Traffic Manager profile will be hosted.

7. Click Create.

8. After you have created the Traffic Manager profile, navigate to it in the Azure portal.

9. On the Traffic Manager profile blade, click Endpoints.

10. On the endpoints blade, click Add to add an endpoint to the Traffic Manager profile. Each endpoint
can reside in a different location.

11. On the Traffic Manager profile blade, click Configuration.

12. On the Configuration blade, you can change the routing method, define the Time to Live (TTL)
parameter for the Traffic Manager DNS records, and configure endpoint monitoring. Traffic Manager
polls each endpoint in the profile to confirm that it is online. You can configure monitoring to use
HTTP or HTTPS. To perform more in-depth checks, you should design a custom page that performs
comprehensive health checks and reports the outcome to Traffic Manager. You must ensure that this
page exists for each endpoint in the Traffic Manager profile.
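The monitoring settings from step 12 can also be changed with PowerShell. A sketch, with placeholder profile and health-check page names:

```powershell
# Point the health probes at a dedicated health-check page over HTTPS.
$profile = Get-AzureRmTrafficManagerProfile -Name MyProfile `
    -ResourceGroupName AdatumRG
$profile.MonitorProtocol = "HTTPS"
$profile.MonitorPort = 443
$profile.MonitorPath = "/healthcheck.aspx"
Set-AzureRmTrafficManagerProfile -TrafficManagerProfile $profile
```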

You can also use Azure PowerShell to configure Traffic Manager by performing the following steps:

1. Start Azure PowerShell, and then sign in to your subscription:

Login-AzureRmAccount

2. If you have multiple subscriptions, select the one in which you are going to create the Traffic
Manager profile:

Set-AzureRmContext -SubscriptionName "Name of your subscription"

3. Create a new resource group:

New-AzureRmResourceGroup -Name AdatumRG -Location centralus

4. Create the Traffic Manager profile with the name Myprofile. Use the Performance routing method
with the DNS name adatum. Provide a TTL value of 30 seconds and HTTP as the monitoring protocol:

$profile = New-AzureRmTrafficManagerProfile -Name MyProfile -ResourceGroupName
AdatumRG -TrafficRoutingMethod Performance -RelativeDnsName adatum -Ttl 30 -
MonitorProtocol HTTP -MonitorPort 80 -MonitorPath "/"

5. Add the first endpoint to the Traffic Manager profile:

$webapp1 = Get-AzureRmWebApp -Name webapp1

Add-AzureRmTrafficManagerEndpointConfig -EndpointName webapp1ep -
TrafficManagerProfile $profile -Type AzureEndpoints -TargetResourceId $webapp1.Id -
EndpointStatus Enabled

6. Add the second endpoint to the Traffic Manager profile:

$webapp2 = Get-AzureRmWebApp -Name webapp2

Add-AzureRmTrafficManagerEndpointConfig -EndpointName webapp2ep -
TrafficManagerProfile $profile -Type AzureEndpoints -TargetResourceId $webapp2.Id -
EndpointStatus Enabled

7. Update the Traffic Manager profile so that the changes take effect:

Set-AzureRmTrafficManagerProfile -TrafficManagerProfile $profile

Enabling and disabling endpoints and profiles


In some scenarios, you might need to temporarily disable individual endpoints or even the entire
Traffic Manager profile. You can use the Enable-AzureRmTrafficManagerProfile or
Disable-AzureRmTrafficManagerProfile cmdlet to enable or disable a Traffic Manager profile. For example:

Enable-AzureRmTrafficManagerProfile -Name MyProfile -ResourceGroupName AdatumRG

Disable-AzureRmTrafficManagerProfile -Name MyProfile -ResourceGroupName AdatumRG

To enable or disable a Traffic Manager endpoint, use the Enable-AzureRmTrafficManagerEndpoint and
Disable-AzureRmTrafficManagerEndpoint cmdlets.
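For example, to take one endpoint out of rotation and later return it (placeholder names; -Force suppresses the confirmation prompt):

```powershell
# Stop directing traffic to the endpoint, for example during maintenance.
Disable-AzureRmTrafficManagerEndpoint -Name webapp1ep `
    -Type AzureEndpoints -ProfileName MyProfile `
    -ResourceGroupName AdatumRG -Force

# Re-enable it once maintenance is complete.
Enable-AzureRmTrafficManagerEndpoint -Name webapp1ep `
    -Type AzureEndpoints -ProfileName MyProfile `
    -ResourceGroupName AdatumRG
```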

Traffic Manager best practices


Follow these rules and best practices to ensure
the best resilience from Traffic Manager:

• Consider adjusting the DNS TTL value. This value determines how often DNS servers and
DNS clients keep entries representing resolved DNS queries in their local cache.
This affects the time it takes for changes in status of Traffic Manager profile endpoints
to propagate to all DNS servers and DNS clients.

• Remember that you can add staging slots to a Traffic Manager profile. This allows you to
implement testing in production.

• Make content of endpoints consistent. If the content and configuration of all endpoints in the Traffic
Manager profile are not identical, the response sent to users might be unpredictable. This rule might
not apply when implementing the Geographic routing method.

• Take advantage of the ability to disable endpoints during web app maintenance. You can perform
maintenance operations on an endpoint, such as updating a deployment, without causing any service
interruptions by redirecting the traffic to other endpoints. To do this, disable the endpoint you want
to maintain before you begin your administrative actions. Traffic Manager will forward all traffic to
other endpoints until you complete the maintenance operation and re-enable this endpoint.
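As a sketch of the first best practice, the TTL can be lowered so endpoint status changes propagate faster, at the cost of more frequent DNS queries (placeholder names):

```powershell
# Lower the DNS TTL for the profile to 30 seconds.
$profile = Get-AzureRmTrafficManagerProfile -Name MyProfile `
    -ResourceGroupName AdatumRG
$profile.Ttl = 30
Set-AzureRmTrafficManagerProfile -TrafficManagerProfile $profile
```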

Demonstration: Configuring Traffic Manager


In this demonstration, you will see how to:
• Create a new Traffic Manager profile.

• Add an endpoint to a Traffic Manager profile by using the Azure portal.

• Test Traffic Manager.

Question: How does the load-balancer solution of Traffic Manager differ from similar
solutions that you can implement in Azure?

Lab: Implementing web apps


Scenario
A. Datum Corporation’s public-facing web app currently runs on an IIS web server at the company’s
chosen ISP. A. Datum wants to migrate this web app into Azure. You must test the Web Apps functionality
by setting up a test A. Datum web app. The A. Datum development team has provided you with web app
content to deploy. You must ensure that the team will be able to stage changes to the test web app
before you deploy these changes to the public-facing web app. A. Datum is a global company, so you also
want to test Azure Traffic Manager, and demonstrate how it distributes traffic across multiple instances of
the web app.

Objectives
After completing this lab, you will be able to:

• Create a new web app.

• Deploy a web app.


• Manage web apps.

• Implement Traffic Manager to load-balance web apps.

Note: The lab steps for this course change frequently due to updates to Microsoft Azure.
Microsoft Learning updates the lab steps frequently, so they are not available in this manual. Your
instructor will provide you with the lab documentation.

Lab Setup
Estimated Time: 60 minutes
Virtual machine: 20533E-MIA-CL1

User name: Student

Password: Pa55w.rd
Before you begin this lab, ensure that you have performed the “Preparing the demo and lab environment”
demonstration tasks at the beginning of this module’s first lesson, and that the setup script is complete.

Exercise 1: Creating web apps


Scenario
You must set up a test web app in Azure. As the first step in this process, you want to create a new web
app. Later in this lab, you will deploy this web app to the test web app.

Exercise 2: Deploying a web app


Scenario
Now that you have created a web app in Azure and added a staging slot, you can publish the internally
developed web app that the A. Datum web development team supplied. In this exercise, you will use a
publishing profile in Visual Studio to connect to the new web app and deploy the web content.

Exercise 3: Managing web apps


Scenario
The web deployment team created an updated style sheet for the A. Datum’s test web app. You must
demonstrate how you can deploy these changes to a staging slot and test them before deploying them to
the production A. Datum web app. In this exercise, you will upload the new web app to the staging slot
that you created in Exercise 1, and you will then swap it into the production slot.

Exercise 4: Implementing Traffic Manager


Scenario
Because the A. Datum web app has clients around the world, you must ensure that it responds rapidly to
requests from different geographic locations. You must evaluate Traffic Manager to see if it can ensure
that users access web content close to their location. You will set up Traffic Manager to serve content
from two different Azure regions.

Question: In the lab, you deployed the A. Datum production website to the production slot
of an Azure web app. You also deployed a new version of the site to a staging slot. Within a
web browser, how can you tell which is the production site and which is the staging site?
Question: At the end of the lab, you used an FQDN within the trafficmanager.net domain
to access your web app. How can you use your own registered domain name to access this
web app?

Module Review and Takeaways


Review Question

Question: What are the advantages of deploying a web app to Web Apps versus deploying
a web app to an Azure VM?

Module 6
Planning and implementing Azure Storage
Contents:
Module Overview
Lesson 1: Planning storage
Lesson 2: Implementing and managing Azure Storage
Lesson 3: Exploring Azure hybrid storage solutions
Lesson 4: Implementing Azure CDNs
Lab: Planning and implementing Azure Storage
Module Review and Takeaways

Module Overview
Microsoft Azure Storage services provide a range of options for provisioning and managing storage. The
services offer four core storage types: blobs, tables, queues, and files. Azure Content Delivery Network
(CDN) is a supplementary storage-related service whose primary goal is to improve the performance of
web applications and services by hosting data in locations that are close to consumers.

IT professionals can provision and manage Azure Storage services by using a variety of tools and
interfaces. These include the Azure portal, Azure PowerShell, Azure Command-Line Interface (Azure CLI),
and open source and non-Microsoft command-line and graphical utilities. In this module, you will learn
about the available data storage options and their management.

Objectives
After completing this module, you will be able to:
• Choose appropriate Azure Storage options to address business needs.

• Implement and manage Azure Storage.

• Describe Azure hybrid storage solutions.


• Implement Azure CDNs.

Lesson 1
Planning storage
With several different available storage options, it is important to understand not only how to implement
them, but also how to identify the one that is most appropriate for your storage needs. Because storage is
a billable service, you should be aware of its cost implications, so you can deploy the most cost-efficient
solutions. This lesson discusses the various data services that are available in Azure, and it outlines factors
to consider when choosing between them.

Lesson Objectives
After completing this lesson, you will be able to:

• Explain the role of Azure Storage in implementing Azure Infrastructure as a Service (IaaS) solutions.

• Explain the different types of services that Azure Storage provides.

• Plan provisioning of Azure Storage standard-tier services.


• Plan provisioning of Azure Storage premium tier services.

• Identify the pricing implications of using different types of Azure Storage services.

Demonstration: Preparing the lab environment


Perform the tasks in this demonstration to prepare the lab environment. The environment will be
configured while you progress through this module, learning about the Azure services that you will use in
the lab.

Important: The scripts used in this course might delete objects that you have in your
subscription. Therefore, you should complete this course by using a new Azure subscription. In
addition, consider using a new Microsoft account that has not been associated with any other
Azure subscription. This will eliminate the possibility of any potential confusion when running
setup scripts.

This course relies on custom Azure PowerShell modules including Add-20533EEnvironment to prepare
the lab environment for demos and labs, and Remove-20533EEnvironment to perform clean-up tasks at
the end of the module.

Role of Azure Storage in implementing Azure infrastructure solutions


Azure Storage is part of Azure data management
services. Several Azure services use Azure Storage,
including Azure VMs, Azure Backup, and Azure
Site Recovery. Other modules of this course cover
these Azure services in detail.
Azure App Service, Azure Cloud Services, and web
applications running on Azure VMs can benefit
from CDN, which provides globally distributed
storage for their content. This improves the
customer experience when accessing these
services from remote locations by minimizing the
time it takes to download that content.

Overview of Azure Storage services


Azure Storage is a service that you can use to store
unstructured and partially structured data.
Developers and cloud architects commonly choose
it to host data that App Service or Azure Cloud
Services use. IT professionals who deploy Azure
virtual machines rely on Azure Storage for storing
virtual machine operating system and data disks,
and for hosting network file share contents.

Azure Storage offers four types of storage services, which correspond to the types of data that they
are designed to store:

• Blobs. These typically represent unstructured files such as media content, virtual machine disks,
backups, or logs. Blobs offer a locking mechanism
which facilitates exclusive file access that IaaS virtual machines require. There are three types of blobs.
The first one, known as a block blob, is optimized for sequential access, which is ideal for media
content. The second one, referred to as a page blob, offers superior random access capabilities, which
is best suited for virtual machine disks. The third one, referred to as an append blob, supports data
append operations, without the need to modify existing content. This works best with logging and
auditing activities.

• Tables. These host non-relational and partially structured content, which consists of multiple rows of
data with different sets of properties. In the context of Azure Table storage, these rows are referred to
as entities. Developers frequently implement table storage as the backend data store for App Service
or Cloud Services.

• Queues. These are temporary storage for messages that Azure services commonly use to
asynchronously communicate with each other. In particular, in distributed applications, a source
component sends a message by placing it in a queue. The destination component works though the
messages in the queue one at a time.
• Files. Similar to blobs, these provide storage for unstructured files, but they offer support for file
sharing in the same manner as traditional on-premises Windows file shares.

There are two tiers of page blob storage: Standard and Premium. Premium storage offers superior
performance, equivalent to what the solid-state drive (SSD) technology provides. A standard storage
account provides performance similar to commodity magnetic disks.

Storage accounts
To use Azure Storage, you first need to create a storage account. Premium storage accounts are strictly for
page blob storage.

By default, you can create up to 200 storage accounts in a single Azure subscription; however, you can
increase this limit to 250 by opening a service ticket with Azure support. Each standard, general-purpose
storage account is capable of hosting up to 500 terabytes (TB) of data, while the maximum size of a
premium storage account is 35 TB. For each storage account, you must specify:
• Name. This defines the unique URL that other services and applications use to access a storage
account’s content. All such URLs include the “core.windows.net” domain suffix. The fully qualified
domain name (FQDN) depends on the type of storage that you want to use. For example, if you
designate the “mystorageaccount” storage account name, you can access its blob service via
http://mystorageaccount.blob.core.windows.net.

• Deployment model. You have the choice between Azure Resource Manager and classic. As mentioned
earlier, this affects the functionality that the storage account will support. For example, classic storage
accounts do not support some of the more recently introduced features, such as Azure Storage
Service Encryption for data at rest or access tiers.
• Kind. This determines the type of content that you will be able to store in the storage account, in
addition to support for access tiers. More specifically, Azure Storage supports three kinds of accounts:

o Blob. Offers optimized support for block and append blobs, but does not support other types of
storage options, including page blobs. The optimization relies on the ability to set the access tier
of the storage account. The choice of access tier, which can be hot, cool, or archive, affects the
way the storage-related charges are calculated and, in the case of the archive tier, the time it
takes to retrieve blobs. By choosing a specific access tier, you can minimize the corresponding
cost of storage based on its usage patterns. More specifically, for the hot access tier, the price per
gigabyte (GB) is higher but charges associated with the number of storage transactions are lower.
In addition, you do not pay for the amount of data that you write to or read from the storage
account. For the cool access tier, the price per GB is more than 50 percent lower, but
transactional charges are higher and you do pay for the data that you write to or read from a
storage account. The archive tier has the lowest price per GB, but is subject to the highest
transactional and access charges. The latency in data retrieval time in this case can be significant,
reaching up to 15 hours in extreme cases.

You can configure the access tier at the storage account level by setting its access tier attribute to
either hot or cool. As a result, any blobs residing in this account will automatically inherit that
access tier setting. Additionally, you can set the hot, cool, or archive attribute explicitly for
individual blobs. That setting takes precedence over the storage level configuration. To retrieve a
blob with the archive access tier, you must change its attribute to either cool or hot.
To switch between access tiers, you can modify either the blob-level attribute or the storage
account–level attribute. The latter will affect the access tier of all blobs for which the access tier
attribute is not explicitly set. It is important to note that these operations might have cost
implications.

Note: Archive access tier and blob-level tiers are available only for block blobs.

Note: The maximum capacity of a blob storage account is 5 petabytes (PBs), which is 10
times larger than general-purpose storage accounts. This increased capacity corresponds to
increased performance in terms of ingress and egress throughput (up to 50 Gbps) and the
maximum number of I/O operations per second (IOPS) (up to 50,000). If you need a similar
increase for a general-purpose storage account, you can submit a request to Azure Support.

o General purpose v1. Provides the ability to host blobs, tables, queues, and files, but without
support for newer features, such as access tiers.

o General purpose v2. Provides the ability to host blobs, tables, queues, and files. They also include
support for newer features, such as access tiers. You can convert general purpose v1 accounts to
general purpose v2 accounts; however, keep in mind that such a conversion is not reversible.

• Performance. This determines performance characteristics of the provisioned storage and directly
impacts the storage service that the account supports. You can choose between Standard and
Premium performance. A Premium performance storage account provides I/O throughput and
latency characteristics equivalent to those delivered by SSDs, but its usage is limited to page blobs.
Effectively, its main purpose is to host virtual disk files of Azure VMs that require superior I/O
performance, typical for enterprise-level workloads. A Standard performance storage account can
host any type of content (blobs, tables, queues, and files), including virtual disk files of Azure VMs. In
this case, though, the resulting virtual disk throughput and latency characteristics are equivalent to
those delivered by commodity hard disk drives (HDDs). You can choose premium performance when
creating general purpose v1 and general purpose v2 storage accounts. Note that the resulting
storage account supports page blobs only. In addition, keep in mind that you cannot change the
performance tier of an existing storage account.

• The replication settings. To ensure resiliency and availability, Azure automatically replicates your data
across multiple physical servers functioning as storage nodes. The number of replicas and the scope
of replication depend on your choice of replication scheme. You can choose from four replication
schemes:
o Locally redundant. Your data replicates synchronously across three copies within a cluster of
storage nodes referred to as a storage scale unit. A single storage scale unit contains multiple
physical racks of storage nodes. Each copy of a single storage account resides in a different
physical rack within a separate fault domain and upgrade domain. This provides resiliency and
availability equivalent to that of compute nodes.

Note: For more information regarding fault and upgrade domains, refer to Module 3 of
this course, “Implementing virtual machines.”

Locally redundant storage (LRS) protects your data against server hardware failures but not
against a failure that affects the entire Azure region. This is the only option available for premium
storage accounts.
o Zone-redundant. Your data replicates across separate datacenters in one or more Azure regions.
Zone-redundant storage (ZRS) offers more durability than LRS. However, ZRS-based storage
accounts do not support Azure VM disk files. At the time of authoring this content, there are two
types of ZRS-replication schemes. The ZRS classic scheme is available when using general purpose
V1 storage accounts. In this case, data replicates asynchronously across multiple datacenters in
one or more Azure regions. The corresponding storage account supports only block blobs. The
more recent ZRS option implements synchronous replication across availability zones in the same
Azure region. The corresponding storage account supports, in addition to block blobs, tables,
files, queues, and page blobs, as long as they do not represent Azure VM disks.

o Geo-redundant. Your data replicates asynchronously from the primary region to a secondary
region. Predefined pairing between the two regions ensures that data stays within the same
geographical area. Data also replicates synchronously across three replicas in each of the regions,
resulting in six copies of storage account content. If failure occurs in the primary region and
Microsoft initiates a failover to the secondary region, the content of the Azure Storage account
becomes available in the secondary location. Effectively, geo-redundant storage (GRS) offers
improved durability over LRS and ZRS.

o Read-access geo-redundant. As with GRS, your data replicates asynchronously across two regions
and synchronously within each region, yielding six copies of a storage account. However, with
read-access geographically redundant storage (RA-GRS), the storage account in the secondary
region is available for read-only access regardless of the primary’s status. This allows you to
perform near real-time data analysis and reporting tasks without affecting your production
workload performance.
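The four schemes differ mainly in replica count, scope, and secondary read access. As a quick reference, the following Python sketch summarizes the descriptions above (the dictionary layout and field names are illustrative, not part of any Azure API):

```python
# Illustrative summary of the four replication schemes described above.
# Copy counts follow the text: three replicas per region, six with geo-redundancy.
REPLICATION_SCHEMES = {
    "LRS":    {"copies": 3, "regions": 1, "secondary_read_access": False},
    "ZRS":    {"copies": 3, "regions": 1, "secondary_read_access": False},
    "GRS":    {"copies": 6, "regions": 2, "secondary_read_access": False},
    "RA-GRS": {"copies": 6, "regions": 2, "secondary_read_access": True},
}

def total_copies(scheme):
    """Return the total number of replicas a scheme maintains."""
    return REPLICATION_SCHEMES[scheme]["copies"]
```

For example, `total_copies("GRS")` returns 6, reflecting the three synchronous replicas in each of the two paired regions.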

Additional Reading: The Azure platform determines the location of the secondary region
automatically, based on the concept of Azure region pairing. For a list of secondary regions for
each of the Azure regions, refer to: “Azure Storage replication” at: https://aka.ms/r3h0wc

• Secure transfer required. Azure Storage supports both secure and nonsecure connections. You can
enforce secure connections by enabling this setting. Enabling it results in the rejection of any access
requests that use protocols without encryption, such as HTTP or Server Message Block (SMB) 2.1.

Note: Storage accounts are encrypted by default, which provides protection of their
content at rest. Azure Storage services automatically encrypt any data during storage account
write operations and decrypt it during read operations. Microsoft manages the encryption keys.

• Location. This designates the Azure datacenter where the primary instance of your storage account
resides. In general, you should choose a region that is close to users, applications, or services that
consume the storage account’s content.
• Virtual networks. The virtual network service endpoints functionality allows you to grant exclusive
access to the storage account from designated subnets of a designated virtual network and
simultaneously prevent connectivity from the internet. As part of the service endpoints configuration
of Azure Storage accounts, you can also allow connections which originate from on-premises
locations and are routed via Azure ExpressRoute. To accomplish this, when configuring service
endpoints, provide on-premises network address translation (NAT) IP addresses used for ExpressRoute
public peering. Note that, at the time of authoring this course, the virtual network service endpoints
functionality is in preview.

Planning for Azure Storage standard services


If you use Azure Storage to host information for a
custom solution, such as a mobile app or a web
app, cloud architects or developers must select the
appropriate storage type for each functional
requirement. To assist with this process, you
should understand the characteristics of each
storage type.

Blob storage
The Azure blob storage service stores large
amounts of unstructured data in the form of blobs.
Within a storage account, blobs reside in
containers. Containers are similar to file folders,
helping you to organize your data and providing extra security. However, unlike file folders, containers
support only a single-level hierarchy. Each blob is identified by a unique URL. For example, if you created a
blob named “myblob.jpg” in a container named “mycontainer” in a storage account named “myaccount,”
then its unique URL would be http://myaccount.blob.core.windows.net/mycontainer/myblob.jpg.
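Because the URL pattern is fixed, a blob's address can be derived directly from the account, container, and blob names. A minimal Python sketch (the function name is invented for illustration):

```python
def blob_url(account, container, blob, secure=True):
    """Build the unique URL of a blob from its account, container, and blob names."""
    scheme = "https" if secure else "http"
    return f"{scheme}://{account}.blob.core.windows.net/{container}/{blob}"
```

Calling `blob_url("myaccount", "mycontainer", "myblob.jpg", secure=False)` reproduces the example URL above.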

When you create a blob, you designate its type either implicitly or explicitly. It is not possible to change
an existing blob’s type. The three types of blobs are:

• Block blobs. Block blobs are optimized for uploads and downloads. To accomplish this optimization,
Azure divides data into smaller blocks of up to 100 megabytes (MB) in size, which subsequently
upload or download in parallel. Individual block blobs can be up to 4.75 TB in size.
• Page blobs. Page blobs are optimized for random read and write operations. Blobs are accessed as
pages, each of which is up to 512 bytes in size. When you create a page blob, you specify the
maximum size to which it might grow, up to the limit of 8 TB. Each page blob in a standard storage
account offers throughput of up to 60 MB per second or 500 IOPS (with 8-KB operations).

Note: At the time of authoring this course, the maximum size of a virtual disk file is 4 TB.

Note: For scalability and resiliency considerations when using Azure Storage page blobs for
Azure VM unmanaged disks, refer to Module 3 of this course.

• Append blobs. Append blobs are strictly for append operations because they do not support
modifications to their existing content. Appending takes place in blocks of up to 4 MB, with up to
50,000 blocks per append blob, which translates roughly into 195 GB.
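The 195 GB figure follows directly from the two limits just quoted, as this short calculation shows:

```python
# Maximum append blob capacity, derived from the limits quoted above.
MAX_BLOCK_MB = 4          # largest single appended block, in MB
MAX_BLOCKS = 50_000       # maximum number of blocks per append blob

# 4 MB x 50,000 blocks = 200,000 MB, which is roughly 195 GB
max_append_blob_gb = MAX_BLOCK_MB * MAX_BLOCKS / 1024
```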

Note: Generally, the Azure platform assigns the appropriate blob type automatically, based
on the intended purpose. For example, when you create an Azure VM from the Azure portal, the
platform will automatically create a container in the target storage account and a page blob
containing the virtual machine disk files.

Table storage
You can use the Azure Table storage service to store partially structured data in tables without the
constraints of traditional relational databases. Within each storage account, you can create multiple tables,
and each table can contain multiple entities. Because table storage does not mandate a schema, the

entities in a single table do not need to have the same set of properties. For example, one Product entity
might have a Size property, while another Product entity in the same table might have no Size property at
all. Each property consists of a name and a value. For example, the Size property might have the value 50
for a particular product.
Similar to blobs, applications can access each table through a URL. For example, to access a table named
“mytable” in a storage account named “myaccount,” applications would use the following URL:
http://myaccount.table.core.windows.net/mytable.
The number of tables in a storage account is limited only by the maximum storage account size. Similarly,
besides the limit on the size of the storage account, there are no restrictions on the maximum number of
entities in a table. Each entity can be up to 1 MB in size and possess up to 252 custom properties. Every
entity also has three designated properties: a partition key, a row key, and a timestamp. The platform
generates the timestamp value automatically, but the table designer chooses the partition key and row
key.

It is important to choose these two properties carefully because Azure uses their combination to create a
clustered index for the table. The clustered index can considerably improve the speed of table searches,
which otherwise would result in a full table scan. You can use the partition key to group similar entities
based on their common characteristic, but with unique row key values. Proper selection of the partition
key can also improve performance when adding entities to a table, by making it possible to insert them in
batches.
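To make the key design concrete, here is a minimal Python sketch of schemaless entities and partition-scoped batching; the entity values and the helper function are invented for illustration and do not use the Azure SDK:

```python
from collections import defaultdict

# Hypothetical product entities: PartitionKey groups related rows, RowKey is
# unique within a partition, and properties may differ per entity (schemaless).
entities = [
    {"PartitionKey": "bikes",   "RowKey": "1001", "Size": 50},
    {"PartitionKey": "bikes",   "RowKey": "1002"},             # no Size property
    {"PartitionKey": "helmets", "RowKey": "1001"},
]

def batches_by_partition(rows):
    """Group entities by PartitionKey; a batch insert must target one partition."""
    groups = defaultdict(list)
    for row in rows:
        groups[row["PartitionKey"]].append(row)
    return dict(groups)
```

Grouping by partition key before insertion mirrors the batching constraint described above: all entities in a single batch must share the same PartitionKey.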

Queue storage
The Azure Queue storage service provides temporary messaging store. Developers frequently use queues
to facilitate reliable exchange of messages between individual components of multitier or distributed
systems. These components add and remove messages from a queue by issuing commands over the HTTP
or HTTPS protocols.

Similar to other Azure storage service types, each queue is accessible from a URL. For example, to access a
queue named “myqueue” in a storage account named “myaccount,” applications would use the following
URL: http://myaccount.queue.core.windows.net/myqueue.

You can create any number of queues in a storage account and any number of messages in each queue
up to the 500 TB limit for all the data in the storage account. Each message can be up to 64 kilobytes (KB)
in size.

Another frequently used Azure service that offers message storage functionality is Service Bus. However,
Service Bus queues differ from Azure Storage queues in many aspects.

Additional Reading: For more information, refer to: “Azure Queues and Service Bus
queues - compared and contrasted” at: http://aka.ms/Ve4qo0

File storage
The Azure File storage service allows you to create SMB file shares in Azure just as you would with an on-
premises file server. Within each file share, you can create multiple levels of folders to categorize content.
Each directory can contain multiple files and folders. Files can be up to 1 TB in size. The maximum size of
a file share is 5 TB.

The Azure File storage service is available via both SMB 2.1 and SMB 3.x protocols. Starting with Windows
8 and Windows Server 2012, the operating system includes SMB 3.x. Linux distributions also provide
support for SMB 3.x by using the cifs-utils package from the Samba project.

The Windows server and client-based versions of SMB 3.x offer several advantages over SMB 2.1,
including built-in encryption. As a result, you can establish mapping to Azure File storage shares from
locations outside the Azure region where the Azure Storage account that is hosting the shares resides.
This includes other Azure regions and your on-premises environment, as long as you allow outbound
traffic on TCP port 445. With SMB 2.1, mappings to file shares are available only from the same Azure
region.

Note: At the time of authoring of this course, the SMB 3.x version in the cifs-utils package
in the Samba project does not support encryption.

Azure storage partitioning


When designing Azure Storage–based solutions, you should keep in mind that the recommended
approach for load balancing and scaling them out involves partitioning. In this context, a partition
represents a unit of storage that can be updated in an atomic manner as a single transaction.

Each storage service type has its own partitioning mechanism. In the case of blob storage, each blob
represents a separate partition. With table storage, the partition encompasses all entities with the same
partition key. Queue storage designates each queue as a distinct partition. File storage uses individual
shares for this purpose.

Additional Reading: For more information about Azure Storage partitions, refer to: “Azure
Storage Scalability and Performance Targets” at: http://aka.ms/E73svf

Planning for Azure Storage premium tier services


While it is possible to aggregate the throughput of
Azure-hosted virtual disks with standard storage
accounts by creating multi-disk volumes, this
approach might not be sufficient to satisfy the I/O
needs of the most demanding Azure VM
workloads. To account for these needs, Microsoft
offers a high performance storage service known
as premium storage.

Virtual machines that use premium storage are capable of delivering throughput exceeding
100,000 IOPS by combining the benefits of two separate components. The first component is the
storage account with the premium performance tier, where Azure VM disk files reside. The second one,
known as Blobcache, is part of the virtual machine configuration, available on any VM size that supports
premium storage. Blobcache is a relatively complex caching mechanism, which benefits from SSD storage
on the Hyper-V host where the Azure VM is running.

Note: For more information about Azure VM sizes, refer to Module 3 in this course.

There are separate limits applicable to the volume of I/O transfers between a virtual machine and a
premium storage account, and between a virtual machine and a local cache. As a result, the effective
throughput limit of a virtual machine is determined by combining the two limits. In the case of the largest
virtual machine sizes, this cumulative limit exceeds 100,000 IOPS (with a 256-KB I/O size), or

1 GB per second, whichever is lower. Keep in mind that the ability to benefit from caching is highly
dependent on I/O usage patterns. For example, read caching would yield no advantages on disks that
Microsoft SQL Server transaction logs use, but it would likely provide some improvement for disks that
SQL Server database files use.
However, virtual machine I/O throughput is only the first of two factors that determine the overall
maximum I/O throughput. The throughput of virtual machine disks also affects effective throughput. In
the case of premium storage, this throughput depends on the disk size, and it is assigned one of the
following performance levels:

• P4. Disk sizes of up to 32 GB, offering 120 IOPS or 25 MB per second.

• P6. Disk sizes of up to 64 GB, offering 240 IOPS or 50 MB per second.

• P10. Disk sizes of up to 128 GB, offering 500 IOPS or 100 MB per second.

• P20. Disk sizes of up to 512 GB, offering 2,300 IOPS or 150 MB per second.

• P30. Disk sizes of up to 1 TB, offering 5,000 IOPS or 200 MB per second.

• P40. Disk sizes of up to 2 TB, offering 7,500 IOPS or 250 MB per second.
• P50. Disk sizes of up to 4 TB, offering 7,500 IOPS or 250 MB per second.
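The size-to-level rounding and the combined VM/disk cap described above can be sketched in a few lines of Python; the table restates the list above, and the helper functions are illustrative simplifications rather than an Azure API:

```python
# Illustrative lookup of the premium disk performance levels listed above.
# A provisioned disk is rounded up to the smallest level that fits its size.
PREMIUM_LEVELS = [
    # (max size in GB, level, IOPS, MB per second)
    (32,   "P4",  120,  25),
    (64,   "P6",  240,  50),
    (128,  "P10", 500,  100),
    (512,  "P20", 2300, 150),
    (1024, "P30", 5000, 200),
    (2048, "P40", 7500, 250),
    (4096, "P50", 7500, 250),
]

def performance_level(disk_gb):
    """Return (level, IOPS, MBps) for the smallest level that fits the disk."""
    for max_gb, level, iops, mbps in PREMIUM_LEVELS:
        if disk_gb <= max_gb:
            return level, iops, mbps
    raise ValueError("disk exceeds the largest premium disk size")

def effective_vm_iops(vm_limit, disk_iops):
    """Effective VM throughput is capped by both the VM's own limit and the
    combined limits of its attached disks, per the description above."""
    return min(vm_limit, sum(disk_iops))
```

For example, a 100-GB disk rounds up to P10 (500 IOPS), and a VM limited to 80,000 IOPS with twenty P40 disks attached is still capped at 80,000 IOPS.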

Azure Storage pricing


The cost associated with Azure storage depends
on a number of factors, including:

• Storage account kind. The choice between the general purpose v1, general purpose v2, and blob
storage accounts has several implications, as described below.

• Storage account performance level. The choice between the Standard and Premium performance
levels also significantly affects the pricing model, as described below.

• Access tier. This applies to blob and general purpose v2 storage accounts, which allow you to choose
between hot, cool, and archive access tiers. This, in turn, affects charges associated with such
storage-related characteristics as space in use, volume of storage transactions, or volume of data
reads and writes.
• Replication settings. LRS storage accounts are cheaper than ZRS accounts, which are cheaper than
GRS accounts; RA-GRS storage accounts are the most expensive.

Note: The Premium performance level implies the use of LRS, because premium storage
accounts do not support zone and geo-replication.

• Volume of storage transactions (for blob, general purpose v1, and general purpose v2 storage
accounts with the standard performance level). A transaction represents an individual operation (an
individual representational state transfer application programming interface [REST API] call) targeting
a storage account. Pricing is provided in a currency amount per 10,000 transactions. In case of
premium performance level storage accounts, there are no transaction-related charges.

• Volume of egress traffic (out of the Azure region where the storage account resides). Inbound data
transfers to Azure are free, and outbound data transfers from Azure datacenters are free for the first 5
GB per month. Banded pricing applies above this level. Effectively, when services or applications co-
locate with their storage within the same region, Azure does not impose charges for bandwidth usage
between compute and storage resources. Data transfers incur extra cost with compute and storage
spanning regions or with compute residing in an on-premises environment.

• Amount of storage space in use (for blob, general purpose v1, and general purpose v2 storage
accounts with standard performance level). Charges are on a per-GB basis. In the case of page blobs,
for example, this means that if you create a new 100-GB virtual hard disk file but use only 10 GB of its
total volume, you are charged only for the 10 GB in use, regardless of how much space was provisioned.
Note that this rule does not apply to scenarios that involve managed disks, where the storage cost
reflects the nominal size of the disks, regardless of the amount of space in use.
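The distinction between billing on space in use and billing on provisioned size can be expressed as a tiny helper; the function name and parameters are invented for illustration:

```python
def billable_gb(provisioned_gb, used_gb, provisioned_billing):
    """Standard unmanaged page blobs bill on space in use; managed and
    premium disks bill on the provisioned (nominal) size instead."""
    return provisioned_gb if provisioned_billing else used_gb
```

For the 100-GB disk example above, an unmanaged standard page blob with 10 GB in use bills for 10 GB, while the equivalent managed disk bills for the full 100 GB.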

• Amount of storage space provisioned (for general purpose v1 and v2 storage accounts with premium
performance tier and managed disks). You calculate Azure Premium Storage pricing based on the size
of the disks that you provision.

Note: If you implement managed disks, pricing also depends on the size of the disks you
provision rather than the amount of disk space in use, even when using the Standard
performance level.

• Volume of data reads and writes (for blobs residing in blob and general purpose v2 storage accounts
with cool and archive access tier).

Note: Changing the storage tier involves reading and writing data, so it is subject to a one-
time charge that reflects the current and target tiers. For example, changing the access tier from
hot to cool for a general purpose v2 storage account results in charges representing write
operations for all blobs without an access tier attribute set. There is no cost for this type of
change when using a blob storage account. Changing the access tier from cool to hot for both
blob and general purpose v2 storage accounts results in charges representing read operations for
all blobs without an access tier attribute set.

• Type of storage (for general purpose v1 and v2 storage accounts). Pricing varies depending on
whether you use a storage account to host page blobs, block blobs, tables, queues, or files.

Additional Reading: For more information, refer to: “Azure Blobs Storage Pricing” at:
http://aka.ms/Mzo4x7

• Early deletion for blobs in the cool or archive tier residing in general purpose v2 storage accounts.
There is a charge associated with any blob that remains in the cool or archive tier for a period shorter
than the predefined limit. This limit is 30 days and 180 days for the cool and archive tiers, respectively.
The cost is prorated based on the number of days remaining to reach the predefined limit.
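The proration rule just described can be sketched as a short calculation; the function is an illustrative simplification that returns the fraction of the early-deletion charge owed, not an actual Azure billing formula:

```python
# Minimum retention periods quoted above for the cool and archive tiers.
MIN_RETENTION_DAYS = {"cool": 30, "archive": 180}

def early_deletion_fraction(tier, days_stored):
    """Fraction of the per-GB early-deletion charge owed, prorated by the
    days remaining until the tier's minimum retention period."""
    minimum = MIN_RETENTION_DAYS[tier]
    return max(0, minimum - days_stored) / minimum
```

For example, deleting a cool-tier blob after 15 days incurs half the charge, and after 30 or more days, none.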

Azure Premium Storage pricing


Azure Premium Storage pricing is calculated based on the size of the disks that you provision, rounded up
to the nearest performance level.

Note: In this case, there are no transaction-related charges. Additionally, no extra costs are
associated with geographic replication, because premium storage accounts only support LRS. The
pricing of managed and non-managed premium storage disks of the matching size is the same.

Check Your Knowledge


Question

What is the maximum capacity of a blob storage account?

Select the correct answer.

4.75 TB

8 TB

35 TB

500 TB

5 PB

Lesson 2
Implementing and managing Azure Storage
In this lesson, you will see how to implement the most common storage options in Azure. You will also get
familiar with the tools and utilities that are available to manage Azure Storage.

Lesson Objectives
After completing this lesson, you will be able to:

• Explain how to use the most common Azure Storage tools.

• Explain how to create a storage account.

• Explain how to manage Azure blob storage.


• Explain how to manage Azure file storage.

• Explain how to implement Azure table and queue storage.

• Explain how to control access to storage.

• Explain how to configure Azure Storage monitoring.

• Implement Azure Storage.

Azure Storage tools


Microsoft designed Azure Storage services to
support custom applications and solutions.
Frequently, storage access operations occur via
programmatic methods invoked from custom
code. These methods might use the Azure SDK
libraries or the representational state transfer
(REST) interfaces that developers communicate
with via HTTP and HTTPS-based requests.
However, several tools allow you to examine and
manage content of Azure storage accounts
without resorting to writing custom code.
Examples of such tools include Windows
PowerShell cmdlets, Azure CLI commands, the AzCopy.exe command-line tool, the Azure Storage Explorer
app, and Microsoft Visual Studio.

Azure PowerShell storage cmdlets


You can perform several Azure Storage management tasks by using Azure PowerShell cmdlets. For
example, these cmdlets allow you to explore the content of an Azure storage account:

• Get-AzureStorageBlob. Lists the blobs in a specified container and storage account.

• Get-AzureStorageBlobContent. Downloads a specified storage blob.

• Get-AzureStorageContainer. Lists the containers in a specified storage account.

• Get-AzureStorageShare. Lists the file shares in a storage account.

• Get-AzureStorageFile. Lists the files and directories in a specified file share.

• Get-AzureStorageFileContent. Downloads a specified file from Azure file storage.



• Get-AzureStorageQueue. Lists the queues in a storage account.

• Get-AzureStorageTable. Lists the tables in a storage account.

Azure CLI storage commands


Azure CLI offers the same features as Azure PowerShell for managing Azure Storage. You can use the
following commands in Azure CLI to perform the same tasks accomplished by using the Azure PowerShell
commands listed above:

• az storage blob list. Lists the blobs in a specified container and storage account.

• az storage blob download. Downloads a specified storage blob.

• az storage container list. Lists the containers in a specified storage account.

• az storage share list. Lists the file shares in a storage account.

• az storage file list. Lists the files and directories in a specified file share.

• az storage file download. Downloads a specified file from Azure file storage.

• az storage queue list. Lists the queues in a storage account.

• az storage table list. Lists the tables in a storage account.

Note: You can also perform the operations listed above directly from the Azure portal.

AzCopy.exe
AzCopy.exe is a command-line tool available for Windows and Linux operating systems. It optimizes data
transfer operations within the same storage account, between storage accounts, and between on-
premises locations and Azure Storage.

Additional Reading: For a detailed description of AzCopy.exe, including its command-line


switches and example commands, refer to: “Transfer data with the AzCopy Command-Line tool”
at: http://aka.ms/dc878m

Storage Explorer
Storage Explorer is an app available for Windows, Linux, and macOS, which provides a graphical interface
for managing several advanced operations on Azure Storage blobs, tables, queues, and files. At the time
of authoring this course, version 0.95 is the most recent release of Storage Explorer.

Additional Reading: To download Storage Explorer, refer to: https://aka.ms/dgfs2c

Visual Studio
Starting with Azure SDK 2.7, you can use Server Explorer and Cloud Explorer from within the Visual Studio
interface to access Azure storage accounts and to manage their content. Both tools allow you to create
storage accounts and manage individual storage services.

Additional Reading: For more information about using Cloud Explorer, refer to: “Manage
the resources associated with your Azure accounts in Visual Studio Cloud Explorer” at:
https://aka.ms/rxh4s5

Creating an Azure Storage account


You can create a storage account by using the
Azure portal, the New-AzureRmStorageAccount
Azure PowerShell cmdlet, or the az storage
account create Azure CLI command. A storage
account name must be globally unique, contain
between three and 24 characters, and include only
lowercase letters and digits.

When you create a general purpose storage account, Azure generates the following endpoints
for access to the four respective storage types:

• https://account_name.blob.core.windows.net/

• https://account_name.table.core.windows.net/
• https://account_name.queue.core.windows.net/

• https://account_name.file.core.windows.net/
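The naming rules and the endpoint pattern lend themselves to a quick local check; the following Python sketch is illustrative only (global uniqueness can be confirmed only by the service itself), and the function names are invented:

```python
import re

# Documented rules: 3 to 24 characters, lowercase letters and digits only.
NAME_PATTERN = re.compile(r"[a-z0-9]{3,24}")

def validate_account_name(name):
    """Check the documented character and length rules for an account name."""
    return NAME_PATTERN.fullmatch(name) is not None

def service_endpoints(account):
    """Generate the four service endpoints for a general purpose account."""
    return [f"https://{account}.{svc}.core.windows.net/"
            for svc in ("blob", "table", "queue", "file")]
```

For example, "mystorageaccount" passes validation, while "My-Account" fails on both the uppercase letter and the hyphen.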

To create a storage account on the Azure portal, follow these steps:

1. On the Azure portal, on the Hub menu, click +Create a resource, and then click Storage.

2. On the storage account blade, click Storage account – blob, file, table, queue.
3. On the Create storage account blade, type a unique Name within the core.windows.net domain. If
the name that you choose is unique, a green check mark appears.

4. Click Resource manager or Classic depending on the type of deployment model you want to use.

5. In the Account kind drop-down list, select Storage (general purpose v1), StorageV2 (general
purpose v2), or Blob storage.

6. If you selected either of the two general purpose storage account types, choose storage performance
by clicking either Premium or Standard.
7. If you selected either of the two general purpose storage account types and Standard performance,
in the Replication drop-down list, select Locally-redundant storage (LRS), Geo-redundant
storage (GRS), Read-access geo-redundant storage (RA-GRS), or Zone-redundant storage
(ZRS).

8. Specify whether you want to require secure transfer by clicking Disabled or Enabled.

9. Choose a target subscription or accept the default selection.


10. Select an existing resource group or create a new one.

11. In the Location drop-down list, click an Azure region where the storage account will be created.
12. Specify whether to configure Azure Storage firewall by granting exclusive access to traffic originating
from designated subnets of virtual networks that you specify. This capability relies on the Service
Endpoints for Azure Storage functionality of Azure virtual networks, described in more detail in
module 2 of this course.
13. Select or clear the Pin to dashboard check box.

14. Click Create.



Note: The availability of some of these options depends on the deployment model,
account kind, and performance that you choose. For example, as mentioned earlier, premium
storage accounts support only locally redundant replication.

In Azure PowerShell, you can create a new Azure Resource Manager storage account by issuing the
following command:

Creating a new Azure Resource Manager storage account in Azure PowerShell


New-AzureRmStorageAccount -ResourceGroupName 'MyResourceGroup' -Name 'mystorageaccount' `
-Location 'Central US' -SkuName 'Standard_GRS'

In Azure CLI, you can create a new Azure Resource Manager storage account by using the following
command:

Creating a new Azure Resource Manager storage account in Azure CLI


az storage account create --resource-group MyResourceGroup --name mystorageaccount \
--location centralus --sku Standard_GRS

During creation of a storage account, Azure automatically generates two account access keys. For a
general-purpose storage account, Azure also generates four endpoints, one for each storage services type.

Managing Azure blob storage


You can store blobs directly in the root container
of the storage account or create custom
containers in which to store blobs. You can create
blob containers by using any of the tools that this
lesson previously described.

Creating blob containers


When you create a container, you must give it a
name and choose the level of anonymous access
that you want to allow from the following options:
• Private. This is the default option. The
container does not allow anonymous access.
This lesson later reviews the available authentication methods.

• Public Blob. This option allows anonymous access to each blob within the container; however, it
prevents browsing the content of the container. In other words, it is necessary to know the full path to
the target blob to access it.

• Public Container. This option allows anonymous access to each blob within the container, with the
ability to browse the container’s content.

Use either of the following methods to create a new container. Before you can create the container, you
must obtain a storage context object by passing the storage account’s primary key:

Creating a blob container in Azure PowerShell


$storageKey = (Get-AzureRmStorageAccountKey -ResourceGroupName 'myResourceGroup' `
  -StorageAccountName 'mystorageaccount').Value[0]
$storeContext = New-AzureStorageContext -StorageAccountName 'mystorageaccount' `
  -StorageAccountKey $storageKey
$container = New-AzureStorageContainer -Name 'mycontainer' -Permission Container `
  -Context $storeContext

Creating a blob container in Azure CLI


az storage account keys list --account-name mystorageaccount --resource-group myResourceGroup
export AZURE_STORAGE_ACCOUNT=mystorageaccount
export AZURE_STORAGE_ACCESS_KEY=<storage_account_key>
az storage container create --name mycontainer --public-access container

Administrators can view and modify containers, in addition to uploading and copying blobs by using tools
such as AzCopy and Storage Explorer. They can also use the following Azure PowerShell cmdlets:

• Remove-AzureStorageBlob. Remove the specified storage blob.

• Set-AzureStorageBlobContent. Upload a local file to the blob container.


• Start-AzureStorageBlobCopy. Copy to a blob.

• Stop-AzureStorageBlobCopy. Stop copying to a blob.

• Get-AzureStorageBlobCopyState. Get the copy state of a specified storage blob.


You can perform the same tasks by using the following Azure CLI commands:

• az storage blob delete. Remove the specified storage blob.

• az storage blob upload. Upload a local file to the blob container.


• az storage blob copy start. Copy to a blob.

• az storage blob copy stop. Stop copying to a blob.

• az storage blob show. Get the copy state of a specified storage blob.

Managing Azure file storage


You use Azure Files to create file shares in an
Azure storage account that are accessible through
the SMB 2.1 or SMB 3.x protocol. Because you can
access on-premises file servers by using the same
protocols, Azure file shares can be particularly
helpful when you migrate on-premises
applications to Azure. If these applications store
configuration or data files on SMB shares,
migration typically will not require any changes to
the application code.

Creating file shares


Within a storage account, you can create multiple file shares. To create a file share, you can use the Azure
portal, Azure PowerShell, Azure CLI, the REST API, or the storage access tools that this lesson described
earlier. You can create a folder hierarchy to organize the content of each share. You can manage folders
by using the same Windows tools that apply to on-premises environments, including File Explorer or the
command prompt.

Use the following commands to create a file share, to create a folder, and to upload a file:

Managing an Azure file share by using Azure PowerShell


$storageAccount = 'mystorageaccount'
$storageKey = (Get-AzureRmStorageAccountKey -ResourceGroupName 'myResourceGroup' `
  -StorageAccountName $storageAccount).Value[0]
$context = New-AzureStorageContext -StorageAccountName $storageAccount `
  -StorageAccountKey $storageKey

#Create the new share


$share = New-AzureStorageShare -Name 'myshare' -Context $context

#Create a directory in the new share
New-AzureStorageDirectory -Share $share -Path 'mydirectory'

#Upload a file
Set-AzureStorageFileContent -Share $share -Source '.\instructions.txt' -Path 'mydirectory'

Managing an Azure file share by using Azure CLI


az storage account keys list --account-name mystorageaccount --resource-group myResourceGroup
export AZURE_STORAGE_ACCOUNT=mystorageaccount
export AZURE_STORAGE_ACCESS_KEY=<storage_account_key>

#Create the new share


az storage share create --name myshare

#Create a directory in the new share
az storage directory create --name mydirectory --share-name myshare

#Upload a file
az storage file upload --source ./instructions.txt --share-name myshare --path mydirectory

Using file shares


To access an Azure file share from an Azure VM running Windows or from an on-premises Windows
computer, run the net use command. The following command will map drive Z to the reports share,
where the storage account is called “adatum12345” and the storage access key is
PlsDTS0oEJWWQ8YOiVbL5kvow0/yg==:

Mapping a drive to an Azure file share from Windows


net use z: \\adatum12345.file.core.windows.net\reports /u:adatum12345
PlsDTS0oEJWWQ8YOiVbL5kvow0/yg==

If you want a drive mapping to persist across reboots, you need to store the credentials used to
map the drive, including the storage account name and its key, in Windows Credential Manager. For this
purpose, you can use either the graphical interface of Credential Manager or the cmdkey command-line
tool.

To mount an Azure file share from a Linux Azure VM, run the mount -t cifs command. The following
command creates a mount point, /mnt/mymountpoint, and uses it to mount the reports share, where the
storage account is called adatum12345 and the storage access key is
PlsDTS0oEJWWQ8YOiVbL5kvow0/yg==:

Mapping a drive to an Azure file share from Linux


mkdir -p /mnt/mymountpoint
sudo mount -t cifs //adatum12345.file.core.windows.net/reports /mnt/mymountpoint -o vers=3.0,username=
adatum12345,password=PlsDTS0oEJWWQ8YOiVbL5kvow0/yg==,dir_mode=0777,file_mode=0777

If you want the mount to persist across reboots, you need to add the following line to the /etc/fstab file:

Persisting a mount to an Azure file share from Linux


//adatum12345.file.core.windows.net/reports /mnt/mymountpoint cifs
vers=3.0,username=adatum12345,password=PlsDTS0oEJWWQ8YOiVbL5kvow0/yg==,dir_mode=0777,file
_mode=0777

Managing Azure table and queue storage


Typically, applications create tables and queues
programmatically. Applications also are
responsible for populating tables with entities and
writing messages to queues, and for reading and
processing that content afterward. As a storage
administrator, you can also view and manage
tables and queues with tools such as Storage
Explorer. The Azure portal, Azure PowerShell, and
Azure CLI also provide a basic method for
managing tables and queues.
For example, you could use the following Azure
PowerShell script to create a table:

Creating a storage table by using Azure PowerShell


$storageAccount = 'mystorageaccount'
$storageKey = (Get-AzureRmStorageAccountKey -ResourceGroupName 'myResourceGroup' -StorageAccountName $storageAccount).Value[0]
$context = New-AzureStorageContext -StorageAccountName $storageAccount -StorageAccountKey $storageKey
New-AzureStorageTable -Name 'MyTable' -Context $context

You could achieve the same outcome by using the following Azure CLI commands:

Creating a storage table by using Azure CLI


az storage account keys list --account-name mystorageaccount --resource-group myResourceGroup
export AZURE_STORAGE_ACCOUNT=mystorageaccount
export AZURE_STORAGE_ACCESS_KEY=<storage_account_key>

az storage table create --name MyTable



To create a new messaging queue by using Azure PowerShell, run the following commands:

Creating a storage queue in Azure PowerShell


$storageAccount = 'mystorageaccount'
$storageKey = (Get-AzureRmStorageAccountKey -ResourceGroupName 'myResourceGroup' -StorageAccountName $storageAccount).Value[0]
$context = New-AzureStorageContext -StorageAccountName $storageAccount -StorageAccountKey $storageKey
New-AzureStorageQueue -Name myqueue -Context $context

To achieve the same outcome by using Azure CLI, you could use the following commands:

Creating a storage queue by using Azure CLI


az storage account keys list --account-name mystorageaccount --resource-group myResourceGroup
export AZURE_STORAGE_ACCOUNT=mystorageaccount
export AZURE_STORAGE_ACCESS_KEY=<storage_account_key>

az storage queue create --name myqueue

Managing access to Azure Storage


Security is vitally important in any cloud solution.
Azure Storage offers a number of mechanisms that
protect its content from unauthorized access.
These mechanisms include storage account keys,
shared access signatures, stored access policies,
Azure Storage firewall, and role-based access
control (RBAC). In this topic, you will see how to
implement and manage each of them.

Storage access keys


Azure automatically generates a primary and a secondary access key for each storage account.
Knowledge of either key provides full control over the storage account from management utilities and
client applications. The Azure portal offers a convenient way to copy both keys to the Clipboard.
Alternatively, you can retrieve them by invoking the Get-AzureRmStorageAccountKey Azure PowerShell
cmdlet or the az storage account keys list Azure CLI command.

For example, the following Azure PowerShell cmdlet retrieves the storage keys for a storage account
named “myaccount” in the resource group named “myResourceGroup” in the current Azure subscription:

Obtaining storage keys by using Azure PowerShell


Get-AzureRmStorageAccountKey -ResourceGroupName 'myResourceGroup' -StorageAccountName 'myaccount'

To achieve the same outcome by using Azure CLI, you would run the following command:

Obtaining storage keys by using Azure CLI


az storage account keys list --resource-group myResourceGroup --account-name myaccount

Having two storage keys allows you to regenerate one of them without disrupting applications that
require continuous access to the storage account. For example, if you point your applications at the
secondary key, you can regenerate the primary key while those applications continue to authenticate
successfully. You can then repeat the process for the secondary key: first point your applications at the
new primary key, and then regenerate the secondary key.
To regenerate the primary access key, use the Azure portal or run the New-AzureRmStorageAccountKey
cmdlet:

Regenerating the primary key by using Azure PowerShell


New-AzureRmStorageAccountKey -KeyName key1 -ResourceGroupName 'myResourceGroup' -StorageAccountName myaccount

To achieve the same outcome by using Azure CLI, run the az storage account keys renew command:

Regenerating the primary key by using Azure CLI


az storage account keys renew --account-name myaccount --key primary --resource-group myResourceGroup
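
The complete zero-downtime rotation sequence can be sketched as follows. This is an illustrative Python model of the ordering only, not an Azure API; apps_use and regenerate are hypothetical callbacks standing in for your application configuration step and the regeneration commands shown above:

```python
def rotate_keys(apps_use, regenerate):
    """Zero-downtime rotation: applications move off each key before it is regenerated."""
    apps_use("secondary")    # 1. point applications at the secondary key
    regenerate("primary")    # 2. regenerating the primary key is now safe
    apps_use("primary")      # 3. point applications at the new primary key
    regenerate("secondary")  # 4. regenerating the secondary key is now safe
```

At no point in this sequence does an application reference a key that is currently being regenerated.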

Shared access signatures


The automatically generated primary and secondary access keys provide full administrative access to the
corresponding storage account, which is not suitable for scenarios that necessitate more restrictive
privileges. To satisfy this requirement, Azure Storage also supports the Shared Access Signature (SAS)
authentication mechanism. SAS-based authentication allows you to limit access to designated blob
containers, tables, queues, and file shares only, or even to narrow it down to individual resources such as
blobs, ranges of table entities, and files. Shared access signatures also offer the ability to specify the set of
operations that are permitted on these resources. Additionally, you can limit the validity of SAS tokens by
assigning a start and an end date and time for the delegated access. SAS also allows you to restrict access
to one or more IP addresses from which requests can originate. In addition, by adjusting SAS parameters,
you can enforce the use of HTTPS, rejecting any HTTP requests.
Microsoft also supports account-level shared access signatures. This functionality allows you to delegate
permissions to perform service-level operations, such as creating blob containers or file shares.

A shared access signature takes the form of a Uniform Resource Identifier (URI), which is signed with the
storage account key. An application or a user with the knowledge of that URI can connect to the
corresponding storage account resources and perform delegated actions within the period that the token
validity parameters defined.
Most commonly, applications rely on the REST API to generate shared access signature URIs. However,
you can also create them by using the Azure portal, Azure PowerShell, or Azure CLI. For example, the
New-AzureStorageContainerSASToken Azure PowerShell cmdlet and the az storage container
generate-sas command generate a shared access signature token for a blob container in a storage
account.
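
The signing step itself can be illustrated with a short Python sketch. The string-to-sign below is deliberately simplified (the actual Azure string-to-sign format includes additional fields), and demo_key is a throwaway value rather than a real account key:

```python
import base64
import hashlib
import hmac

def sas_signature(string_to_sign: str, account_key_b64: str) -> str:
    # The sig parameter of a SAS URI is an HMAC-SHA256 digest of a
    # "string-to-sign" (permissions, validity window, resource path, and
    # so on), keyed with the base64-decoded storage account key.
    key = base64.b64decode(account_key_b64)
    digest = hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    return base64.b64encode(digest).decode("utf-8")

# Simplified example: permissions, start, expiry, and canonicalized resource
demo_key = base64.b64encode(b"not-a-real-account-key").decode("utf-8")
signature = sas_signature("r\n2018-01-01\n2018-01-02\n/myaccount/mycontainer", demo_key)
```

Because the signature is derived from the account key, regenerating that key invalidates every SAS URI signed with it.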

Stored access policies


While shared access signatures allow you to narrow down the scope of privileges and duration of access
to content for an Azure storage account, their management presents some challenges. In particular,
revoking access that was granted directly through a shared access signature requires replacing the storage
account keys with which its URI was signed. Unfortunately, such an approach is disruptive because it
invalidates any other currently configured connections to the storage account that rely on the same
storage account key.

To address this challenge, Azure Storage supports stored access policies. You define such policies on the
resource container level, including blob containers, tables, queues, or file shares, by specifying the same
parameters that you would otherwise assign directly to a shared access signature, such as the level of
permissions or start and end of the token validity. After a shared access policy is in place, you can
generate shared access signature URIs that inherit its properties. Revoking policy-based shared access
signature tokens requires modifying or deleting the corresponding policy only, without affecting access
granted via storage account keys or shared access signature URIs that are associated with other policies.
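
The revocation behavior can be modeled in a few lines of Python. This is an illustrative model only, not Azure code, and the policy name is hypothetical:

```python
# Stored access policies are defined at the container level; each
# policy-based SAS token references its policy by identifier.
policies = {"contoso-read": {"permission": "r", "expiry": "2018-12-31"}}

def policy_token_valid(policy_id: str) -> bool:
    # A policy-based token is honored only while the policy it references
    # exists; deleting the policy revokes every token issued against it,
    # without touching the storage account keys.
    return policy_id in policies
```

Deleting the "contoso-read" entry revokes all tokens that reference it, while tokens signed directly with the account key, or tied to other policies, keep working.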

Additional Reading: For more information about using shared access signatures and
stored access policies, refer to: “Shared Access Signatures, Part 1: Understanding the shared
access signature model” at: http://aka.ms/R96g60

Azure Storage firewall and virtual network service endpoints


By default, every Azure Storage account has a public endpoint, which is reachable from any computer or
device with internet access. You can control who accesses the content of the storage account by using the
mechanisms described earlier in this topic. In addition, in many scenarios, you might want to limit network
access to connections originating from individual public IP addresses or IP address ranges. To accomplish
this, you can use Azure Storage firewall.
When you turn on Azure Storage firewall by specifying IP address ranges from which traffic will be
allowed, you automatically block traffic originating from all remaining IP addresses. This might affect
users’ access to the storage account from the Azure portal and their use of functionality such as logging
or diagnostics. To avoid these unintended consequences, you should:

• Ensure that the storage access requests, including those initiated from the Azure portal, originate
from the allowed range of IP addresses.
• Configure exceptions that allow read access necessary for collection of storage logs and storage
metrics. You can enable these exceptions from the Firewalls and virtual networks blade of the
storage account in the Azure portal.

• Configure exceptions that allow access from Microsoft trusted services, including Azure DevTest Labs,
Azure Event Grid, and Azure Event Hubs. This configuration option is also readily available from the
Firewalls and virtual networks blade of the Azure portal.
To restrict access to a storage account from your on-premises environment, you should identify the public
IP addresses associated with your edge network devices and include them in the firewall configuration.
When using ExpressRoute public or Microsoft peering, you should include the two public IP addresses
through which the ExpressRoute circuit connects to the Microsoft network edge.

In addition to restricting access to one or more public IP addresses, you can also restrict access to a
storage account to traffic originating from designated subnets of Azure virtual networks. This capability
leverages the Service Endpoints for Azure Storage functionality of Azure virtual networks.

Service endpoints represent subnets of virtual networks in the same Azure region as the storage account
and, in the case of GRS and RA-GRS storage accounts, the paired Azure region. By associating virtual
network subnets with a storage account, you can ensure that storage accounts are accessible exclusively
from within your private IP address space over the Azure backbone network.

Service endpoints apply exclusively to traffic originating from the designated subnets of a virtual network.
This means that you cannot use this functionality to provide exclusive access to a storage account from
another virtual network via VNet peering or from an on-premises network via ExpressRoute private
peering.

Note: For more information regarding service endpoints, refer to Module 2 of this course.

RBAC
To control delegated management of Azure Storage resources, you can use RBAC. Note that RBAC applies
to managing storage accounts (control plane). The earlier methods presented in this topic are applicable
to restricting access to the content of a storage account (data plane).

RBAC includes a few predefined roles that provide delegated access to Azure storage accounts, including
Reader, Contributor, Storage Account Contributor, and Virtual Machine Contributor. If these roles are not
flexible enough, you can define custom ones. Their definitions consist of a list of permitted and prohibited
operations and assignable scopes to which these operations apply.
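
A custom role definition takes the form of a JSON document, which you can register with the New-AzureRmRoleDefinition cmdlet. The sketch below is an illustrative assumption, not a role shipped by Microsoft; the role name and the subscription placeholder are hypothetical:

```json
{
  "Name": "Storage Key Operator (custom)",
  "Description": "Can view storage accounts and list or regenerate their keys.",
  "Actions": [
    "Microsoft.Storage/storageAccounts/read",
    "Microsoft.Storage/storageAccounts/listKeys/action",
    "Microsoft.Storage/storageAccounts/regenerateKey/action"
  ],
  "NotActions": [],
  "AssignableScopes": [
    "/subscriptions/<subscription-id>"
  ]
}
```

The AssignableScopes list controls where the role can be granted, for example at a subscription or resource group scope.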

Additional Reading: For more information about RBAC, refer to: “Azure Role-based Access
Control” at: http://aka.ms/Jq63oa

Note: Managed disks provide more granular control over access to virtual machine disk
files by using RBAC. Module 3 of this course, “Implementing virtual machines,” covers managed
disks in more detail.

Monitoring Azure Storage with Azure Storage Analytics


Monitoring and diagnostics features are built into
the functionality of any standard Azure storage
account, allowing you to view, record, and analyze
its performance and utilization levels so that you
can adjust your storage design according to your
workloads’ demands.

Note: Monitoring and diagnostics are not available for Azure Premium Storage accounts.

Managing diagnostics
By default, storage account diagnostics collect aggregate and per-API metrics for blob, table, and queue
storage, and retain them for seven days. The diagnostics configuration settings are accessible from the
Diagnostics blade in the Azure portal. From there, you can perform the following actions:

• Set the retention period to a value between 1 and 365 days.

• Selectively disable or enable aggregate metrics for each type of storage service. This includes data
such as the volume of ingress and egress traffic, availability, capacity, latency, or percentage of
successful access requests aggregated for the Blob, Table, Queue, and File services.

• Selectively disable or enable per-API metrics. This provides more granular control, allowing you to
decide if you should collect aggregates of individual types of storage API operations.

• Selectively disable or enable logs for the Blob, Table, and Queue services. This allows you to view the
details of each operation and is helpful in diagnosing the causes of poor performance or identifying
unauthorized access attempts.

Note: At the time of authoring this course, logs are not available for the Azure Storage File
service. Metrics and logging are not available for the ZRS classic storage accounts.

Note: To view logs, you can use any of the Azure Storage tools described earlier in this
lesson. Logs reside in the $logs blob container of the storage account. There are also designated
containers that host capacity and availability metrics.

To configure diagnostics settings for an existing storage account by using the Azure portal, follow these
steps:

1. In the Azure portal, on the Hub menu, click All services.

2. In the list of services, click Storage accounts.

3. On the Storage accounts blade, click the storage account that you want to configure.

4. On the storage account blade, click any graph in the Monitoring section.

5. On the Metric blade, click Diagnostics settings.


6. If diagnostics are disabled, on the Diagnostics blade, click On below the Status label.

7. Select the check boxes next to the metrics or logs that you want to collect.

8. Use the slider at the bottom of the blade to set the number of days (from 1 through 365) to retain
diagnostics data.

9. Click Save.

Note: Enabling diagnostics increases the storage account–related charges, because collected data resides in tables and blobs in the same storage account.

Additional Reading: You can configure diagnostics settings automatically when you
provision a storage account by using a Resource Manager template. For details, refer to:
“Automatically enable Diagnostic Settings at resource creation using a Resource Manager
template” at: https://aka.ms/pil2av

After you enable diagnostics for a storage account, you can display collected data in the Monitoring
section on the storage account’s blade in the Azure portal.

To add a metric to the monitoring chart, follow these steps:

1. On the Azure portal, click the Monitoring lens of the account’s blade, and then click Edit chart.

2. On the Edit Chart blade, select the Time Range (past hour, today, past week, or custom).

3. In the drop-down list box below the Time Range section, select the storage service type for which
you want to display metrics (blob, queue, table, or file).

4. Select check boxes next to the individual metrics that you want to display in the chart.

5. Click OK.

Managing alerts
You can configure alerts for any storage resource based on the metrics that you are collecting. An alert
indicates when the value of a metric that you designated satisfies a set of criteria that you defined. The
criteria include a condition (such as greater than), a threshold value that depends on the type of metric,
and a time period during which the condition must be satisfied. You can configure an alert to send an
email to owners, contributors, or readers of the target resource, in addition to sending an email to an
arbitrary email address. Additionally, as part of the alert definition, you can specify a Webhook, which
designates an HTTP or HTTPS endpoint to which the alert would be routed.

Perform the following steps to set up an alert:

1. On the storage account’s blade on the Azure portal, click any graph in the Monitoring section.

2. On the Metrics blade, click Add metric alert.

3. In the Add an alert rule blade, specify the:

o Name. This is the name of the alert.

o Description. This is the description of the alert.

o Alert on. This is the source of the alert. In this case, its value is set to metric.

o Subscription. This is the subscription where the monitored resource resides.

o Resource group. This is the resource group containing the resource.

o Resource. This is the name of the target resource (storage account and service type).

o Metric. This is the metric that the rule will monitor.

o Condition. This is one of the following: greater than, greater than or equal to, less than, or less than or equal to.

o Threshold. This value corresponds to the condition that you specified.


o Period. This is the period during which a condition is evaluated (from five minutes through six
hours).

o Email owners, contributors, and readers. This is a check box that needs to be enabled or
disabled.

o Additional administrator emails. This is a text box in which you can specify one or more email
accounts.

o Webhook. This is the HTTP or HTTPS endpoint to which the alert will route.

o Take action. This option allows you to specify an Azure logic app, whose execution the alert will
automatically trigger.

4. Click OK.

Monitoring performance of Azure Premium Storage accounts


To monitor performance of an Azure Premium Storage account, you can use standard utilities available
from an Azure VM that contains the virtual disk files residing in that storage account. Such utilities include
Performance Monitor in Windows operating systems or iostat in the Linux operating system. You can also
gather diagnostics data by using the Azure VM Diagnostics extension and store it in a standard storage
account.

Note: Azure Storage integrates with Azure Monitor. This provides a centralized interface for
viewing logs from a wide range of Azure resources. In addition, Azure Monitor includes support
for collecting metrics from premium storage accounts. At the time of authoring this course, this
functionality is in public preview. For more information regarding Azure Monitor, refer to Module
11 of this course.

Demonstration: Using Azure Storage


In this demonstration, you will see how to:

• Create an Azure storage account.

• Create an Azure Files share.


• Mount an Azure file share on an Azure VM.

Check Your Knowledge


Question

You need to provide a customer with time-limited access to the content of a blob
container in an Azure Storage account. You must ensure that you can revoke the
access without affecting other customers who rely on the same storage account key.
What should you do?

Select the correct answer.

Give the customer the primary access key.

Give the customer the secondary access key.

Configure the container as public.

Give the customer a shared access signature.

Configure a stored access policy. Give the customer a shared access signature
based on the stored access policy.

Lesson 3
Exploring Azure hybrid storage solutions
Azure offers a range of services that leverage Azure Storage in hybrid scenarios. The Microsoft Azure
StorSimple offering implements cross-premises, multi-tier storage. Azure File Sync provides the ability to
build distributed multi-tier file services. With the Import/Export service and Azure Data Box, you can
transfer tens of terabytes of data between on-premises data stores and Azure Storage without regard for
availability of sufficient network bandwidth. In this lesson, you will learn about the capabilities and
characteristics of these services.

Lesson Objectives
After completing this lesson, you will be able to:

• Describe hybrid storage solutions based on StorSimple.

• Explain how to perform data transfers by using the Azure Import/Export service and Azure Data Box.

• Describe the architecture and high-level implementation steps of Azure File Sync.

Hybrid storage capabilities of StorSimple


StorSimple is a multipurpose, cross-premises
storage solution that leverages Azure storage and
compute capabilities. It provides features such as:
• Multi-tier storage for a variety of workloads,
such as static archives, moderately used file
shares, and highly dynamic content such as
SQL Server databases or virtual machine disks.

• Automated data archival.

• Snapshot-based backups.
• Disaster recovery.

In the context of hybrid scenarios, the core component of StorSimple-based solutions is an on-premises
appliance, which is available as one of the following:
• A StorSimple 8000 series physical device.

• A virtual device also referred to as a StorSimple virtual array, running on the Microsoft Hyper-V or
VMware ESX platform.

The most common use of StorSimple involves implementing hybrid storage, with Azure serving as the tier
in which infrequently accessed content resides. StorSimple virtual arrays include a single local storage tier
managed by the same hypervisor that hosts the virtual device. StorSimple 8000 series devices contain
both SSD and HDD tiers. Data is transferred automatically between tiers according to the usage patterns
and policies that you define. However, you have the ability to designate individual volumes that should
always remain available locally by configuring them as locally pinned. This makes the StorSimple devices
suitable for workloads, such as virtual machines or SQL Server databases, that cannot tolerate latency
associated with the use of secondary or tertiary tiers. For any data that qualifies for upload to Azure
Storage, the device automatically applies deduplication, compression, and encryption to ensure maximum
efficiency and security. StorSimple also offers support for hot and cool access tiers of Azure blob storage
accounts, which allows you to further optimize the cost effectiveness of cloud storage usage.

A StorSimple 8000 series physical device operates as an Internet Small Computer System Interface (iSCSI)
target, delivering functionality equivalent to an enterprise-level storage area network solution. A
StorSimple virtual device can function either as an iSCSI target or an SMB file server. A virtual device is
more suitable for branch office scenarios, where higher latency and lack of high availability are
acceptable.
In addition to serving as a multitier storage solution, StorSimple allows you to perform on-demand and
scheduled backups to Azure Storage. These backups take the form of incremental snapshots, which limit
the space required to accommodate them and which complete much more quickly than differential or full
backups. You can use these backups to perform restores on the on-premises device. The backup capability
also offers several other advantages. StorSimple supports deploying virtual appliances into Azure, known
as StorSimple Cloud Appliances. This, in turn, makes it possible for you to duplicate your on-premises
environment in Azure by mounting backed-up volumes onto the Azure virtual
appliance. This facilitates a range of business scenarios, including performing nondisruptive tests against
copies of live data, carrying out data migrations, or implementing disaster recovery. For additional
resiliency, you can configure Azure Storage hosting backed-up content as ZRS or GRS. To accommodate
disaster recovery workloads that require higher throughput or lower latency of I/O operations, you can
create a virtual device that provides access to Azure Premium Storage.

To manage these physical and virtual StorSimple components, you can use graphical and command-line
utilities and interfaces, including:

• StorSimple Device Manager service. This interface, available from the Azure portal, provides the ability
to administer physical or virtual StorSimple devices and appliances, including their services, volumes,
alerts, backup policies, and backup catalogs. Note that you must use one instance of StorSimple
Manager to manage physical devices and Azure virtual appliances and another instance to manage
virtual devices.

• Local web user interface. This interface allows you to perform the initial setup of a virtual or physical
device and register it with the StorSimple Device Manager service.
• Windows PowerShell for StorSimple. This is a collection of cmdlets that perform actions specific to
physical devices, such as registration, network and storage configuration, installation of updates, and
troubleshooting. To access these cmdlets, you must either connect directly to the target appliance via
its serial port or establish a Windows PowerShell remoting session.

• Azure PowerShell StorSimple cmdlets. This is a collection of cmdlets that perform service-level
administrative tasks, primarily those available via the StorSimple Manager interface.

• StorSimple Snapshot Manager. This is a Microsoft Management Console snap-in for initiating and
administering backups, restores, and cloning operations.

• StorSimple Adapter for SharePoint. This is a plug-in for the Microsoft SharePoint Administration
portal that facilitates moving SharePoint SQL Server content databases to Azure blob storage.

StorSimple pricing
You can purchase StorSimple as part of your existing Microsoft Enterprise Agreement or contact
storagesales@microsoft.com regarding the procurement process.

Cross-premises data transfer with the Azure Import/Export service and


Azure Data Box
The Import/Export service allows you to transfer
data by shipping physical disks between on-
premises locations and Azure Storage whenever
the data volume makes relying on network
connectivity too expensive or unfeasible. To
handle the shipment, you can use commercial
carriers, including FedEx, UPS, and DHL.

Import/Export service
Performing the transfer involves creating either
import or export jobs, depending on the transfer
direction:

• You create an import job to copy data from your on-premises infrastructure onto disks that you
subsequently ship to the Azure datacenter that is hosting the target storage account.
• You create an export job to request that data currently held in an Azure Storage account be copied to
disks that you ship to the Azure datacenter. When the disks arrive, the Azure datacenter operations
team completes the request and ships the disks back to you.
A single job can include up to 10 disks. You can create jobs directly from the Azure portal. You can also
accomplish this programmatically by using Azure Storage Import/Export REST API.

Import/Export service requires the use of internal SATA II/III HDDs or SSDs. Each disk contains a single
NTFS volume that you encrypt with BitLocker when preparing the drive. To prepare a drive, you must
connect it to a computer running a 64-bit version of the Windows client or server operating system and
run the WAImportExport tool from that computer. The WAImportExport tool handles data copy, volume
encryption, and creation of journal files. Journal files are necessary to create an import/export job and
help ensure the integrity of the data transfer.

Additional Reading: The WAImportExport tool is available from Microsoft Download site
at: https://aka.ms/Welhs7

Import/Export service supports the following types of Azure Storage operations:

• Exporting block, page, and append blobs from Azure blob and general purpose v1 storage accounts.
• Importing data into block, page, and append blobs in Azure blob and general purpose v1 storage
accounts.

• Importing data into Azure files in Azure general purpose v1 storage accounts.

To perform an import, follow these steps:

1. Create an Azure Storage account.


2. Identify the number of disks that you will need to accommodate all the data that you want to
transfer.

3. Identify a computer that you will use to perform the data copy, attach physical disks that you will ship
to the target Azure datacenter, and install the WAImportExport tool.

4. Run the WAImportExport tool to copy the data, encrypt the drive with BitLocker, and generate
journal files.

5. Use the Azure portal to create an import job referencing the Azure Storage account. As part of the
job definition, specify the destination address representing the Azure region where the Azure Storage
account resides.

6. In the Azure portal, specify the return address and your carrier account number. Microsoft will ship
the disks back to you once the import process is complete.

7. Ship the disks to the destination that you specified when creating the import job and update the job
by providing the shipment tracking number.

Once the disks arrive at the destination, the Azure datacenter staff will carry out data copy to the target
Azure Storage account and ship the disks back to you.
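Step 2 of the import procedure asks you to work out how many drives the job requires. The following Python sketch shows one way to estimate this; the function name and the 5 percent allowance for file-system and BitLocker overhead are illustrative assumptions, not documented guidance.

```python
import math

def disks_needed(total_data_gib, disk_capacity_gib, overhead_fraction=0.05):
    """Estimate how many drives an import job requires.

    Reserves a fraction of each disk for file-system and BitLocker
    overhead; the 5% default is an assumption for this example.
    """
    usable = disk_capacity_gib * (1 - overhead_fraction)
    return math.ceil(total_data_gib / usable)

# Example: 8 TiB of data on 1-TiB drives
print(disks_needed(8192, 1024))   # 9 drives
```

Remember that a single job can include at most 10 disks, so a larger estimate means splitting the transfer across multiple jobs.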

In order to perform an export, follow these steps:

1. Identify the data in the Azure Storage blobs that you intend to export.

2. Identify the number of disks that you will need to accommodate all the data you want to transfer.

3. Use the Azure portal to create an export job referencing the Azure Storage account. As part of the job
definition, specify the blobs you want to export, the return address, and your carrier account number.
Microsoft will ship your disks back to you after the export process is complete.

4. Ship the required number of disks to the Azure region hosting the storage account. Update the job
by providing the shipment tracking number.
Once the disks arrive at the destination, Azure datacenter staff will carry out data copy from the storage
account to the disks that you provided, encrypt the volumes on the disks by using BitLocker, and ship
them back to you. The BitLocker keys will be available in the Azure portal, allowing you to decrypt the
content of the disks and copy them to your on-premises storage.

Azure Data Box


Azure Data Box is a tamper-resistant physical network-attached storage (NAS) appliance that allows you
to securely move large amounts of data into Azure. The appliance features 256-bit Advanced Encryption
Standard (AES) encryption, has a 100-TB capacity, and supports SMB and Common Internet File System
(CIFS) protocols. To transfer on-premises data to Azure, you should first order the appliance from
Microsoft and have it shipped to your physical location. Next you should attach it to your network,
perform data copy, and ship it back to Microsoft. You can monitor the progress of this process by using
the Azure portal. Microsoft handles all end-to-end logistics.

Azure Data Box integrates with a number of non-Microsoft storage solutions from vendors such as
Commvault, Veritas, Veeam, and NetApp.

Note: At the time of authoring this content, Azure Data Box is in public preview.

Hybrid file services with Azure File Sync

Azure File Sync benefits


The primary purpose of the Azure File Sync service
is to provide synchronization across shares
residing on multiple Windows Server 2012 R2 or
Windows Server 2016 file servers. To accomplish
this, the service relies on an Azure Files share as a
synchronization hub, which additionally hosts the
master copy of content synchronized from file
servers. There is no impact on users’ experience;
they continue to access the shared content via
drive mappings or Universal Naming Convention
(UNC) paths pointing to their local Windows Server–based file server.

Optionally, Azure File Sync also allows you to combine synchronization with data tiering. In this scenario,
individual on-premises file servers operate as the hot tier, providing direct access to most frequently used
content. Azure Files operates as the cold tier, hosting less frequently used data. When a user attempts to
access a file residing in the cold tier, the file is automatically downloaded to the local file server.

Tiering is activated when the percentage of free disk space on the volume hosting the server endpoint
drops below the threshold that you specify. At that point, the local agent automatically starts moving the
files with the oldest last-modified and last-accessed attributes to the Azure Files share.
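At a high level, the tiering decision described above can be modeled as follows. This is a simplified illustration, not the actual Azure File Sync selection algorithm, and the function name is made up for this example.

```python
from datetime import datetime, timedelta

def files_to_tier(files, free_space_pct, threshold_pct):
    """Pick files to move to the Azure Files share (cold tier).

    'files' maps file names to their last-access timestamps. When free
    space on the server-endpoint volume falls below the threshold, the
    least recently accessed files are tiered first.
    """
    if free_space_pct >= threshold_pct:
        return []                      # volume is healthy; nothing to tier
    # Oldest last-access time first
    return sorted(files, key=files.get)

now = datetime(2018, 5, 1)
files = {
    "report.docx": now - timedelta(days=90),
    "notes.txt": now - timedelta(days=2),
    "archive.zip": now - timedelta(days=365),
}
print(files_to_tier(files, free_space_pct=10, threshold_pct=20))
# ['archive.zip', 'report.docx', 'notes.txt']
```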

Another benefit of Azure File Sync is the ability to implement centralized backup of Azure Files shares with
Azure Backup. This capability leverages Azure Files snapshot functionality, with a retention period of up to
120 days.

Azure File Sync architecture


A sync group consists of on-premises Windows file servers and a corresponding Azure Files share that are
synchronizing their content. The Azure Files share constitutes the cloud endpoint, while the file system
paths representing synchronized content on Windows file servers are referred to as server endpoints. A
Windows file server can contain multiple, nonoverlapping server endpoints, but can be a member of one
sync group only.

Storage Sync Service is the core component of Azure File Sync that manages the relationship between the
cloud endpoint and server endpoints within one or more sync groups. You can create multiple Storage
Sync Service instances in the same Azure subscription.

The synchronization process is agent based. You must install the agent on every Windows file server that
is a member of a sync group. After installing the agent, you must register the server with the Storage Sync
Service and make it part of a sync group.

Azure File Sync preserves a file’s timestamp and its access control list (ACL) entries. As a result, all server
endpoints have matching content, including file system permissions. However, note that these permissions
do not apply when accessing the content of the Azure Files share directly. In addition, although you can
directly modify the content of that share, it might take up to 24 hours for this change to replicate to
server endpoints. Changes to server endpoints are synchronized nearly immediately.

Note: At the time of authoring this content, Azure Files shares do not support ACLs.

Note: At the time of authoring this content, Azure File Sync does not support global file
locks. Concurrent changes to a file on two different server endpoints will result in multiple,
uniquely named copies of that file.

Implementing Azure File Sync


The recommended method of implementing Azure File Sync involves the following steps:

1. Create a general purpose v1 Azure Storage account in the Azure region closest to your physical
location.

2. Provision a Storage Sync Service in your Azure subscription in the same Azure region as the storage
account.

3. Create a file share in Azure Files of the storage account.


4. Create a sync group.

5. Add the Azure Files share to the sync group.

6. Download the Azure File Sync agent and install it on the Windows server hosting the share that you
want to synchronize across multiple file servers.

7. Register the server with the Storage Sync Service.

8. Add the server endpoints to the sync group.

9. Optionally, enable tiering and specify the percentage of free disk space that must be available on the
volume where the server endpoint resides.

10. Wait for the initial sync to the Azure Files share to complete.

11. At this point, you can add more server endpoints to the same sync group by repeating steps 6
through 8.

Note: At the time of authoring this content, Azure File Sync is in public preview.

Check Your Knowledge


Question

What type of data transfer operations does the Azure Import/Export service support?

Select the correct answer.

Exporting block blobs from Azure general purpose v1 storage accounts

Importing block blobs into Azure general purpose v1 storage accounts

Exporting files from Azure Files of a general purpose v1 storage account

Importing files into Azure Files of a general purpose v1 storage account

Importing tables into the Azure Table storage service of a general purpose v1
storage account

Lesson 4
Implementing Azure CDNs
Azure provides the CDN service, which decreases the time it takes to download web content by first
distributing it across multiple locations around the world and then delivering it from the location that is
closest to the consumer of that content. This lesson presents the concept and architecture of CDNs and
describes the process of implementing Azure CDNs.

Lesson Objectives
After completing this lesson, you will be able to:

• Describe the purpose and functionality of CDNs.

• Describe CDN architecture.

• Explain how to cache blob content by using Azure CDNs.

• Explain how to cache cloud services content by using Azure CDNs.


• Explain how to use custom domain addresses with Azure CDNs.

Introduction to CDNs
The delivery speed of internet-resident content is a key factor in satisfying consumers of media and
web-based applications. A content delivery network (CDN) is a collection of globally distributed servers
at locations referred to as points of presence (POPs), whose purpose is to maximize this speed. A CDN
accomplishes this objective by caching web and media content across its servers and then delivering it
from the server that is closest to the consumer of that content. More specifically, by default, when a user
or app requests content configured for integration with a CDN, Azure attempts to retrieve that content
from the nearest CDN server. If the content is not available there, Azure retrieves it from the origin, and
the CDN servers cache it to make it available for subsequent requests.

CDNs offer a number of advantages:

• Improved user experience, especially if users reside in areas distant from the original content location.
• Protection of published content from distributed denial-of-service (DDoS) attacks. Azure CDNs include
functionality that detects such attacks. Providing multiple copies of content serves as an additional
mitigating factor.

• Improved scalability by eliminating performance bottlenecks that are associated with hosting content
in a single location.

• Increased resiliency by eliminating a single point of failure. In particular, if one CDN node becomes
unavailable, the service transparently redirects requests to the nearest node.

Note: CDNs are intended primarily for static content. Dynamic content needs to be
refreshed constantly from the content provider, minimizing and potentially eliminating any
associated CDN benefits. You can, however, provide efficient caching in some scenarios that
involve serving different content depending on input values incorporated into the web request.

Additional Reading: For more information, refer to: “Using CDN for Azure” at:
http://aka.ms/Aaa7h4

Additional Reading: For the latest POP list, refer to: “Azure Content Delivery Network
(CDN) POP Locations” at: http://aka.ms/P70n6a

Overview of CDN architecture


CDN caches content from a range of Azure services, including Azure Storage blobs, Azure Web Apps, and
Azure Cloud Services. Additionally, CDN can cache content from web apps residing in on-premises
datacenters or hosted by non-Microsoft cloud providers.

To improve your web app’s responsiveness by leveraging CDN, you must create a CDN profile, which
serves as a logical grouping of endpoints, representing the origins of cached content. When a user
requests the web app’s content, Azure attempts to retrieve it from the nearest available endpoint. If the
content is not available, Azure retrieves it from the origin, and subsequently CDN endpoints cache it.
A CDN profile constitutes an administrative and billing unit. The cost depends on the pricing tier of the
profile, the volume of outbound data transfers associated with transferring content to the CDN endpoints,
and, in the case of Azure Storage blobs, the number of storage transactions. Within the profile, you can
manage additional features, such as:

• Geo-filtering. This includes blocking or allowing access to cached content from designated
countries/regions.

• Analytics and reporting. Core analytics include information about CDN usage patterns, including such
parameters as bandwidth, data transferred, or cache hit ratio. Advanced HTTP reports provide similar
information in an easy-to-review format. For example, geography reports show regions from which
requests for your content originate. Daily Summary reports aggregate such statistics as number of hits
or amount of data transferred from your points of origin.

• Delivery rules. By using the delivery rules, you can alter default processing of HTTP requests, allowing
you to block different types of content or return customized HTTP headers. You can also enforce
different caching policies depending on properties of incoming requests, such as client IP address or
the request header.

• Asset preloading. By default, content is copied from its origin into the cache on CDN servers only in
response to incoming requests. The first request for such content will likely incur extra latency. By
preloading content (referred to in this case as assets), you can eliminate this initial delay.

• Purging. By default, cached content remains in the cache until its Time to Live (TTL) expires. However,
there might be situations where cache is out of sync with the origin. In such cases, you can use the
purge capability to remove outdated content from the cache. As a result, the subsequent request will
trigger retrieval of up-to-date content from the origin.
Availability of these features depends on the CDN product. At the time of authoring this course, there
are three CDN products: Azure CDN Standard from Akamai, Azure CDN Standard from Verizon, and Azure
CDN Premium from Verizon.

Additional Reading: For more information regarding features available with each CDN
product, refer to: “Overview of the Azure Content Delivery Network (CDN)” at:
https://aka.ms/ke4fqv

A CDN profile can contain up to ten endpoints, and there is a limit of eight CDN profiles per Azure
subscription. Each endpoint designates an origin of cached content and can point to one of the following:

• Azure Storage blob.

• Azure Web app that is associated with a Standard or Premium App Service plan.

• Azure cloud service.

• Azure Media Services streaming endpoint.


• Custom origin. A custom origin can represent any web location accessible via HTTP or HTTPS,
including your web apps hosted in perimeter networks of on-premises datacenters.

For every endpoint, you can configure a number of settings, such as:

• Compression. You can enable or disable this setting.

• Query string caching behavior. You use this setting to customize caching behavior, depending on
whether the request to the endpoint includes a query string. For example, by selecting the Cache
every unique URL option, CDN will cache content from a URL ending with “page1.ashx?q=one”
separately from content from a URL ending with “page1.ashx?q=two”. Alternatively, you can cache
the same content for both of these requests by choosing the Ignore query strings option, or ignore
caching altogether when you choose the Bypass caching for query string option.

• Protocols. You use this setting to enable an endpoint for HTTP and HTTPS.
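The effect of the three query-string options described above can be sketched as follows. The cache-key format is a hypothetical simplification for illustration; real CDN internals differ.

```python
from urllib.parse import urlsplit

def cache_key(url, mode):
    """Derive the cache key a CDN endpoint might use for a request.

    Illustrates the three query-string caching modes: ignore query
    strings, cache every unique URL, and bypass caching for query
    strings.
    """
    parts = urlsplit(url)
    if mode == "ignore":                 # Ignore query strings
        return parts.path
    if mode == "cache-unique":           # Cache every unique URL
        return (parts.path + "?" + parts.query) if parts.query else parts.path
    if mode == "bypass":                 # Bypass caching for query string
        return None if parts.query else parts.path
    raise ValueError(mode)

print(cache_key("http://example.azureedge.net/page1.ashx?q=one", "ignore"))
# /page1.ashx -- same key as q=two, so both requests share one cached copy
```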

Creating CDN profiles and endpoints


To provision a CDN, you first need to create a CDN profile. To create a CDN profile, use the following
steps:

1. In the Azure portal, on the Hub menu, click +Create a resource.

2. On the New blade, click Web + Mobile.

3. On the Web + Mobile blade, click CDN.

4. On the CDN profile blade, specify the following:

o Name. Use a unique name in your current subscription and resource group.

o Subscription. This is your current subscription that should host the profile.

o Resource group. This is a new or existing resource group.


o Location. This Azure region will host the profile.

o Pricing tier. Choose between Premium Verizon, Standard Verizon, and Standard Akamai.

o Create a new CDN endpoint now. Enable this check box if you want to create a CDN endpoint
while creating a CDN profile. You will need to provide a subset of settings described in the next
section, including name, origin type, and origin hostname.

o Pin to dashboard. Enable this if you want the CDN profile to appear directly on the dashboard.

5. Click Create.

To create a CDN endpoint within a CDN profile, follow these steps:

1. On the CDN profile blade, click + Endpoint.

2. On the Add an endpoint blade, specify the following:

o Name. This is a unique name in the azureedge.net Domain Name System (DNS) namespace.

o Origin type. This can be Storage, Cloud service, Web App, or Custom origin.

o Origin hostname. This is the name of the service that represents the origin type that you selected.

o Origin path. This designates the directory path of the content that CDN should retrieve from the
origin.
o Origin host header. This designates the host header value that should be sent to the origin with
each request. This is applicable if you host multiple virtual domains on a single target server.

o Protocol and origin port. This allows you to selectively enable or disable HTTP and HTTPS and
specify their respective ports.

o Optimized for. The values available for this setting depend on the pricing tier. They include
general web delivery, general media streaming, video on demand media streaming, large file
download, and dynamic site acceleration.
3. Click Add.

Using CDN to cache content from Azure blobs, Azure Web Apps, and
Azure Cloud Services
For a CDN to cache blobs, they must be accessible
anonymously. This effectively means that blobs
should reside in containers with an access type
property of either Blob or Container.
When you configure a CDN endpoint that points
to a public container in an Azure storage account
as its origin, you effectively define a new URL to
access blobs in the container via CDN. For
example, if you have a storage account named
“mystorageaccount” with a public container
named “public”, then the origin would be
designated by the combination of the origin
hostname and the origin path, yielding the URL http://mystorageaccount.blob.core.windows.net/public.
When you create an endpoint, you need to choose a unique name in the azureedge.net DNS namespace,
which represents the CDN cached content that is available at http://uniquename.azureedge.net/public.
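The hostname substitution described above amounts to a simple URL rewrite, which the following sketch expresses in Python. The `cdn_url` helper is hypothetical, not part of any Azure SDK.

```python
def cdn_url(origin_url, endpoint_name):
    """Rewrite a public-container blob URL to its CDN equivalent.

    Swaps the storage account host for the endpoint's azureedge.net
    host while keeping the container path. Minimal string handling
    for illustration only.
    """
    path = origin_url.split("/", 3)[3]          # e.g. "public"
    return "http://{0}.azureedge.net/{1}".format(endpoint_name, path)

print(cdn_url("http://mystorageaccount.blob.core.windows.net/public",
              "uniquename"))
# http://uniquename.azureedge.net/public
```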

A blob stays in the CDN cache for a period referred to as the Time to Live (TTL), which is seven days by
default. You can modify this by assigning a custom TTL value to a blob. In such cases, Azure Storage
returns the TTL value as part of a Cache-Control header in response to a CDN caching request. To assign a
custom value to an Azure Storage blob, you can use Azure PowerShell, Azure CLI, Azure Storage Client
Library for .NET, REST APIs, or the Azure storage management tools described earlier in this module.
Similar to blob-based endpoints, cached content from Azure Web Apps and Azure Cloud Services has a
seven-day TTL by default. The TTL is determined by the value of the Cache-Control header in the HTTP
response from the origin. For Azure Web Apps and Azure Cloud Services, you can set this value by
specifying the system.webServer\staticContent\clientCache element in the applicationHost.config
file for your site or web.config files for your individual web apps. The setting dictates a custom TTL value
for all objects within the site or within the web app. Web app–level settings take precedence over site-
level settings. For ASP.NET applications, you can further customize TTL by assigning CDN caching
properties programmatically by setting the HttpResponse.Cache property.
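Because the TTL travels in the Cache-Control header, you can reason about the effective caching period by parsing that header. The sketch below assumes the seven-day default described above; it is minimal illustrative parsing, and a production parser should handle the full header grammar.

```python
import re

def ttl_seconds(cache_control, default=7 * 24 * 3600):
    """Extract the TTL from a Cache-Control header value.

    Falls back to the seven-day CDN default when no max-age
    directive is present.
    """
    match = re.search(r"max-age=(\d+)", cache_control or "")
    return int(match.group(1)) if match else default

print(ttl_seconds("public, max-age=3600"))   # 3600
print(ttl_seconds(""))                       # 604800 (seven days)
```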

Additional Reading: For more information about TTL with cloud services, refer to: “How to
Manage Expiration of Cloud Service Content in the Azure Content Delivery Network (CDN)” at:
http://aka.ms/Vx0qfy

Using custom domains to provide access to CDNs


In many scenarios, you might want to point to
CDN–cached content by using names that belong
to your own custom DNS namespace, rather than
referencing names in the default azureedge.net
namespace that CDN assigns.

To accomplish this, you first need to create a DNS canonical name (CNAME) record at your domain
registrar, which represents an alias of the CDN endpoint’s FQDN. Next, you must include the custom
domain in the configuration of the endpoint’s settings. During the second part of this process, CDN
verifies whether the CNAME record actually exists.

Note: Remember that, by default, the CDN endpoint is not accessible via the newly
registered CNAME record for up to 90 minutes following the verification step. This is because of
the time it takes to propagate custom domain settings across all CDN nodes. To avoid this delay,
you can preregister the asverify subdomain within your custom domain and use it for
verification.
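The record pairs involved in the two approaches can be sketched as follows. The domain names and the helper function are hypothetical, chosen only to illustrate the mapping.

```python
def cname_records(custom_domain, endpoint_host, preverify=False):
    """Return (alias, target) CNAME pairs for mapping a custom domain.

    With preverify=True, adds the asverify pair that lets CDN validate
    ownership before traffic cuts over, avoiding the propagation delay
    noted above.
    """
    records = [(custom_domain, endpoint_host)]
    if preverify:
        records.insert(0, ("asverify." + custom_domain,
                           "asverify." + endpoint_host))
    return records

for alias, target in cname_records("cdn.contoso.com",
                                   "uniquename.azureedge.net",
                                   preverify=True):
    print("{0} CNAME {1}".format(alias, target))
```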

Additional Reading: For details regarding using the asverify subdomain, refer to: “How to
map Custom Domain to Content Delivery Network (CDN) endpoint” at: https://aka.ms/ivysl3

Check Your Knowledge


Question

What is the default period during which content remains cached by a CDN?

Select the correct answer.

One day

Two days

Five days

Seven days

14 days

Lab: Planning and implementing Azure Storage


Scenario
The IT department at Adatum Corporation uses an asset management application to track IT assets such
as computer hardware and peripherals. The application stores images of asset types and invoices for
purchases of specific assets. As part of Adatum’s evaluation of Azure, you need to test Azure storage
features as part of your plan to migrate the storage of these images and invoice documents to Azure.
Adatum also wants to evaluate Azure File storage for providing SMB 3.0 shared access to installation
media for the asset management application client software. Currently, corporate file servers host the
media.

Objectives
After completing this lab, you will be able to:

• Provision and configure Azure Storage.

• Use Azure File storage.

Note: The lab steps for this course change frequently due to updates to Microsoft Azure.
Microsoft Learning updates the lab steps frequently, so they are not available in this manual. Your
instructor will provide you with the lab documentation.

Lab Setup
Estimated Time: 50 minutes
Virtual machine: 20533E-MIA-CL1

User name: Student

Password: Pa55w.rd
Before starting this lab, ensure that you have stepped through the “Preparing the demo and lab
environment” demonstration tasks at the beginning of the first lesson in this module and that the setup
script has completed.

Exercise 1: Creating and configuring Azure Storage


Scenario
Adatum currently stores images for IT assets as files in a local folder. As part of your Azure evaluation, you
want to test storing these images as blobs in Azure so that a new Azure-based version of the asset
management application can easily access them.

Exercise 2: Using Azure File storage


Scenario
Adatum currently stores invoices for IT assets in the Microsoft Word format in a local folder. As part of
your evaluation of Azure, you want to test the uploading of these files to a file share in your Azure storage
account to make it easier for users to access them from VMs in Azure.

Question: The asset management application stores images of hardware components as
blobs and invoices as files. If the application also needed to search the location of each asset
by using an asset type, a unique asset number, and a text description of the location, what
storage options should you consider?

Module Review and Takeaways


Review Question

Question: Why should you co-locate storage accounts and the Azure services that use them?

Best Practices
When using Azure Storage, consider the following best practices:

• Choose the most appropriate storage type based on your application requirements and the format of
the data to store.

• Co-locate storage accounts and the services that use them in the same region.

Module 7
Implementing containers in Azure
Contents:
Module Overview 7-1
Lesson 1: Implementing Windows and Linux containers in Azure 7-2

Lab A: Implementing containers on Azure VMs 7-14

Lesson 2: Implementing Azure Container Service 7-16


Lab B: Implementing Azure Container Service (AKS) 7-32

Module Review and Takeaways 7-33

Module Overview
Hardware virtualization has drastically changed the IT landscape in recent years. The emergence of cloud
computing is one consequence of this trend. However, a new virtualization approach promises to bring
even more significant changes to the way you develop, deploy, and manage compute workloads. This
approach is based on the concept of containers.
In this module, you will learn about containers and how you can implement them in Microsoft Azure. You
will also learn about deploying and managing clusters of containers by using Azure Container Service with
open source container orchestration solutions.

Objectives
After completing this module, you will be able to:

• Implement Windows and Linux containers in Azure.

• Implement Azure Container Service.



Lesson 1
Implementing Windows and Linux containers in Azure
Azure provides a hosting platform for implementing Linux and Windows containers. This platform
provides the scalability, resiliency, and agility of the underlying infrastructure. At the same time, you can
use the same container management techniques that you use in your on-premises environment. In this
lesson, you will learn about the basic concepts related to containerization and its most prominent format,
which Docker offers. You will also learn how to deploy single and multicontainer workloads to Azure
virtual machines (VMs) and implement an Azure-based registry of Docker images.

Lesson Objectives
After completing this lesson, you will be able to:

• Explain the concept of containers.

• Explain the basic characteristics of Docker.


• Implement Docker hosts in Azure.

• Implement Docker containers in Azure.

• Create and deploy multicontainer workloads in Azure.

• Implement Azure Container Registry.

Demonstration: Preparing the lab environment


Perform the tasks in this demonstration to prepare the lab environment. The environment will be
configured while you progress through this module, learning about the Azure services that you will use
in the lab.

Important: The scripts used in this course might delete objects that you have in your
subscription. Therefore, you should complete this course by using a new Azure subscription. You
should also use a new Microsoft account that has not been associated with any other Azure
subscription. This will eliminate the possibility of any potential confusion when running setup
scripts.

This course relies on custom Azure PowerShell modules, including Add-20533EEnvironment to prepare
the lab environment, and Remove-20533EEnvironment to perform clean-up tasks at the end of the
module.

Introduction to containers
Hardware virtualization made it possible to run
multiple isolated instances of operating systems
concurrently on the same physical hardware.
Containers are the next stage in the virtualization
of computing resources. Container-based
virtualization allows you to virtualize the operating
system. This way, you can run multiple
applications within the same instance of the
operating system, while maintaining isolation
between them. This means that containers within a
VM provide functionality similar to that of VMs
within a physical server.

To better compare the two, the following table lists the high-level differences between VMs and
containers.

Feature           VMs                                   Containers
----------------  ------------------------------------  ---------------------------------------
Isolation         Built into the hypervisor             Relies on the operating-system
                                                        support mechanism
Required amount   Includes the operating system and     Includes containerized app
of memory         app requirements                      requirements only
Startup time      Includes the time it takes to start   Includes only the time it takes to
                  the operating system, services,       start the app and app dependencies,
                  apps, and app dependencies            as the operating system is already
                                                        running
Portability       Portable, but the image is larger     More portable, because the image
                  because it includes the operating     includes only apps and their
                  system                                dependencies
Image automation  Depends on the operating system       Based on a registry
                  and apps

When compared with physical and virtual machines, containers offer a number of advantages, including:

• Increased flexibility and speed when developing and sharing the application code.

• Simplified application testing.


• Streamlined and accelerated application deployment.

• Higher workload density, resulting in improved resource utilization.

Support for containers relies on two capabilities that are part of the operating system kernel:

• Namespace isolation. Each container operates in its own isolated namespace, which provides the
resources necessary to run containerized applications, including file system or network ports, for
example. These resources map to the resources of the host operating system. When an application
makes a change to a file that is part of its namespace, the container performs a copy-on-write
operation. From that point on, the container keeps track of the differences between its version of the
modified file and the underlying file system resource.

• Resource governance. The host operating system controls the amount of resources, such as central
processing unit (CPU), random access memory (RAM), or network, that each of its containers can use.
This prevents any container from affecting the performance and stability of other containers.

Linux supports containers by relying on kernel features such as namespaces and control groups (cgroups).
Windows Server 2016 provides two methods for hosting containers, each offering different degrees of
isolation with different requirements:

• Windows Server containers. These containers provide app isolation through process and namespace
isolation technology. Windows Server containers share the operating system kernel with the container
host and with all other containers that run on the host. Although this provides a faster startup
experience, it does not provide complete isolation of the containers.

• Microsoft Hyper-V containers. These containers increase the level of isolation by running each
container in a highly optimized VM. In this configuration, the Hyper-V containers do not share the
operating system kernel of the container host. Effectively, this allows you to run Windows and Linux
containers on a Hyper-V host. For Linux containers, this also requires Windows Subsystem for Linux.

Note: At the time of authoring this content, there is no support for running Windows
containers on Linux.

Additional Reading: For more information regarding Windows containers, refer to:
“Windows Containers” at: https://aka.ms/Kterug

Introduction to Docker
At the time of authoring this course, the most
popular containerization technology is available
from Docker. Docker is a collection of open-source
tools and cloud-based services that provide a
model for packaging, or containerizing, app code
into a standardized unit. This standardized unit,
called a Docker container, is suitable for software
development, deployment, and management. A
Docker container is software wrapped in a
complete file system that includes everything it
needs to run, such as code, runtime, system tools,
and system libraries.

Docker containers are based on an open standard that allows them to run on all major Linux distributions
and Windows Server 2016. They do not depend on any specific infrastructure, which facilitates multicloud
deployments.

The core of the Docker platform is the Docker engine. This in-host daemon provides a lightweight runtime
for a Docker environment. It takes the form of a daemon on a Linux operating system and a service on a
Windows Server operating system. You can use Docker client software to communicate with the Docker
engine to run commands that build, provision, and run Docker containers. The Docker engine guarantees
that the app always runs the same way, regardless of the host on which it is running.

In addition to the Docker engine, other core components of the Docker ecosystem include:

• Image. A read-only collection of files and execution parameters representing a containerized workload. An image includes all dependency and configuration information that is necessary to provision a container.

• Container. A runtime instance of an image, consisting of the image, its execution environment, and a
standard set of instructions. Containers include a writeable but nonpersistent file system. You can stop
and restart containers, while retaining their settings and file system changes. However, removing a
container results in deletion of all of its content.

Note: To retain file system changes, you can mount a volume within a container to
persistent storage, such as a folder within the container’s host.

• Dockerfile. A text file that contains the commands to build a Docker image.

Docker toolbox
The Docker toolbox is a collection of Docker platform tools that developers can use to build, test, deploy,
and run Docker containers. These tools include:

• Docker client. This command shell–based management software allows you to create, start, and
administer containers.

• Docker Engine. This is a lightweight runtime environment for building and running Docker containers.

• Docker Compose. This tool enables you to build and run apps that consist of multiple containers.
• Docker Machine. This tool enables you to provision Docker hosts by installing the Docker Engine on a
target computer in your datacenter or at a cloud provider. Docker Machine also installs and
configures the Docker client so that it can communicate with the Docker Engine.

• Docker Registry. This is a repository of container images accessible via the Docker application
programming interface (API). Docker offers a public repository, known as Docker Hub, but you can
create your own private repository, referred to as Docker Trusted Registry.

• Kitematic. This graphical user interface–based tool simplifies working with Docker images and
containers.

You can download and install the Docker tools on various platforms, including Windows, Linux, and
Mac OS X.

Additional Reading: You can create and manage Docker containers by using the
PowerShell module for Docker. For more information about the PowerShell module for Docker,
refer to: “PowerShell for Docker” at: https://aka.ms/hrk0t9

Implementing Docker hosts in Azure

Azure offers several ways to configure Azure VMs to include support for Docker containers:

• On Linux VMs, install the Docker engine, the Docker client, and Docker Compose by using the Custom Script Extension, Docker VM extension, or cloud-init. You can use this approach to install Docker on an existing Azure VM. Alternatively, you can include it when deploying a new Azure VM via an Azure Resource Manager template or a command-line script.

Note: At the time of authoring this content, the Azure Docker VM extension for Linux is
deprecated and is scheduled to be retired in November 2018.

• Deploy a Docker Azure VM based on images available from the Azure Marketplace, such as Windows
Server 2016 Datacenter with Containers or Docker on Ubuntu Server images. On Azure VMs based on
Windows Server 2016 Datacenter with Containers, the deployment process automatically adds the
Containers feature. Both images also contain all core Docker components.

• Use the Docker Machine Azure driver to deploy an Azure VM running Linux with support for Docker
containers. Docker Machine is a command-line tool that allows you to perform Docker-related
administrative tasks, including provisioning new Docker hosts. This tool includes support for
automatically installing the Docker engine while deploying Azure VMs. To perform such deployment,
you need to include the --driver azure parameter when running the docker-machine create
command. For example, the following command deploys a new Azure VM named dockervm1 in the
Azure subscription that you specify, creates an administrative user account named dockeruser, and
allows connectivity on TCP port 80. With the default settings, the VM has the Standard_A2 size, uses
the Canonical Ubuntu Server 16.04.0-LTS image, and resides in the West US region on an Azure
virtual network named Docker Machine and in a resource group named Docker Machine. A default
network security group associated with the network interface of the VM allows inbound connectivity
on TCP port 22, for Secure Shell (SSH) connections, and on TCP port 2376, for remote connections
from the Docker client. The command also generates self-signed certificates that secure subsequent
communication from the computer where you ran Docker Machine and stores the corresponding
private key in your user’s account profile:

docker-machine create -d azure \
--azure-ssh-user dockeruser \
--azure-subscription-id your_Azure_subscription_ID \
--azure-open-port 80 \
dockervm1

Additional Reading: You can modify the default settings described above by including
additional command-line parameters and assigning custom values to them. For example, to
deploy a different image, use the --azure-image parameter. For the full syntax of the docker-machine create -d azure command, refer to: “Microsoft Azure” at: https://aka.ms/mrs5mc
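
As an illustration, the following sketch combines several of these optional parameters to control the VM size, region, and resource group. The VM name, resource group name, size, and location below are placeholder values chosen for this example, not course defaults:

```shell
# Sketch: deploy a Docker host Azure VM with customized settings.
# dockervm2, DockerRG, Standard_D2_v2, and westeurope are placeholders.
docker-machine create -d azure \
  --azure-subscription-id your_Azure_subscription_ID \
  --azure-ssh-user dockeruser \
  --azure-size Standard_D2_v2 \
  --azure-location westeurope \
  --azure-resource-group DockerRG \
  --azure-open-port 80 \
  dockervm2
```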

Additional Reading: You can run Docker Machine on Windows, Linux, and Mac OS X
operating systems. For installation instructions and links to download locations, refer to: “Install
Docker Machine” at: https://aka.ms/rwfvoc

• Use the OneGet provider PowerShell module to install the Docker engine and Docker tools on a
Windows Server Azure VM by completing the following tasks from a Windows PowerShell console:

a. Install the Docker-Microsoft PackageManagement Provider from the PowerShell Gallery:

Install-Module -Name DockerMsftProvider -Repository PSGallery -Force

b. Install the latest version of the Docker package:

Install-Package -Name docker -ProviderName DockerMsftProvider -Force



c. Restart the computer by running the following command:

Restart-Computer -Force
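
After the restart, you can confirm that the installation succeeded before deploying any workloads. The following commands are a sketch; the microsoft/nanoserver image name is an assumption reflecting Docker Hub naming at the time of authoring and might have changed since:

```shell
# Verify that the Docker engine responds and report client and server versions
docker version
docker info

# Optionally, run a minimal Windows base container to test the host
# (the image name is an assumption and may vary by release)
docker run microsoft/nanoserver cmd /c echo Hello from a Windows container
```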

• Deploy an AKS cluster. This allows you to provision and manage multiple instances of Docker
containers residing on clustered Docker hosts. The second lesson of this module covers this approach
in detail.

Note: With the introduction of Ev3 and Dv3 Azure VM series, which include support for
nested virtualization, you can implement Hyper-V containers in Azure.

Deploying containers on Azure VMs


A common approach to deploying containers to
Azure VMs relies on the Docker client. You can
connect to a Docker host operating system within
an Azure VM to run the Docker client in the
following ways:

• A local or remote Docker Machine session

• A Remote Desktop Protocol (RDP) session to a Windows Server 2016 VM running Windows containers

• An SSH session on a Linux VM

For information about connecting to Azure VMs via RDP and SSH, refer to Module 4, “Managing Azure VMs.” In this topic, you will learn about deploying containers to an Azure VM running Linux by using Docker Machine from a Windows computer. Keep in mind that Docker Machine is also available for the Linux and Mac OS operating systems.

By default, using Docker Machine to deploy a new Azure VM generates a self-signed certificate. You can
use this certificate to establish a secure SSH session to the Docker engine running on the Azure VM once
the provisioning process completes. The private key of the certificate resides in the local profile of your
user account. To simplify management of the remote Docker engine via the SSH session, you should
configure Docker-specific environment variables on your local Windows computer. To identify these
environment variables, run the following at the command prompt:

docker-machine env dockervm1

where dockervm1 is the name of the Azure VM that you deployed by running the docker-machine
create command. The above command should return output similar to the following:

SET DOCKER_TLS_VERIFY="1"
SET DOCKER_HOST="tcp://191.237.46.90:2376"
SET DOCKER_CERT_PATH="C:\Users\Admin\.docker\dockervm1\certs"
SET DOCKER_MACHINE_NAME="dockervm1"
@FOR /f "tokens=*" %i IN ('docker-machine env dockervm1') DO @%i

At this point, you can download and start a container on the Azure VM by running the following
command:

docker run -d -p 80:80 --restart=always container_name



This command automatically locates the container with the name container_name, configures it to be
accessible via port 80, initiates its execution in the detached mode, and ensures that the container always
restarts after it terminates, regardless of the exit status. In the detached mode, the command-prompt
session is not attached to the container process, so you can use it to run other commands. In the attached
mode, the command-prompt session displays any messages that the Docker container generates.

Additional Reading: For the full syntax of the docker run command, refer to: “docker run”
at: https://aka.ms/rnaxx2

Additional Reading: For more details regarding running containers on Azure VMs by using
Docker Machine, refer to: “How to use Docker Machine to create hosts in Azure“ at:
https://aka.ms/e373fj

The docker run command first attempts to locate the specified container image locally on the Docker host. If it finds one, it checks its version against the Docker Hub at https://aka.ms/llyb6d. This is a central, Docker-managed repository of Docker images available publicly via the internet. If there is no locally cached matching container image or if its version is out of date, the Docker engine automatically downloads the latest version from the Docker Hub.

When you run docker run, you must specify an image from which to derive the container. The creator of
the image might have applied a number of default settings, including:

• Detached or attached mode

• Network settings

• Runtime constraints on CPU and memory

With docker run, you can add to or override the image defaults that were configured during image
creation. Additionally, you can override nearly all the defaults of the Docker runtime.

The Docker client includes other command-line options, including:

• docker images. This lists the images available on the local Docker host.
• docker stop. This stops a running container.

• docker rm. This removes an existing container.
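
For example, a typical inspection and cleanup sequence might resemble the following sketch, where web1 is a hypothetical container name:

```shell
# List images cached on the local Docker host
docker images

# List running containers, then stop and remove the one named web1
docker ps
docker stop web1
docker rm web1
```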


The Docker client also includes tools for automating the creation of container images. Although you can
create container images manually, using an automated image-creation process provides many benefits,
including:

• The ability to store container images as code.

• The rapid and precise re-creation of container images for maintenance and upgrade purposes.

• Support for continuous integration.

Three Docker components drive this automation:

• Dockerfile. This text file contains the instructions needed to create a custom image from a base
image. These instructions include the identifier of the base image, commands to run during the
image creation process, and a command to run during provisioning of containers referencing the
image.

Additional Reading: For information about the Dockerfile syntax, refer to: “Dockerfile
reference” at: http://aka.ms/wrccuy

• docker build. This Docker engine command references a Dockerfile to create an image.

Additional Reading: For more information on docker build, including a list of all the build
options, refer to: “docker build” at: http://aka.ms/u29exr

• docker commit. This command captures changes that you made to a container and creates a new
image that includes these changes.
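
To illustrate how these three components relate, the following is a minimal sketch of a Dockerfile that customizes the public nginx base image. The file content, the site folder name, and the image tag are illustrative assumptions, not part of the course labs:

```dockerfile
# Start from the public nginx base image
FROM nginx
# Copy static site content from the build context into the web root
COPY ./site /usr/share/nginx/html
```

You would then run docker build from the directory containing the Dockerfile to produce a reusable image, for example docker build -t customnginx . , and reference the resulting customnginx image in subsequent docker run commands.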

Azure Container Instances


Another method of implementing containers in Azure relies on the Azure Container Instances service. This
service allows you to deploy individual containers without explicitly provisioning virtual machines to serve
as their hosts. To deploy a container, you must provide its name, its resource group, the Azure region
where it should reside, a Docker image and the corresponding operating system type, and its resources,
such as the number of CPU cores and amount of memory. You must also specify whether the container
instance will be accessible via a public IP address. For convenient access, you can assign a DNS label
corresponding to this IP address. To facilitate persistent storage, you can mount volumes within a file
system of Azure Container Instances to Azure Storage–resident file shares.

By default, each container of Azure Container Instances operates independently. However, it is possible to
create multicontainer groups that share the same host virtual machine and have access to the same
network and storage resources.

You can provision a container instance in several ways: directly from the Azure portal, by using the New-AzureRmContainerGroup PowerShell cmdlet, by running the az container create Azure CLI 2.0 command, or via an Azure Resource Manager template.
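
As an example, the following Azure CLI 2.0 command sketches such a deployment; the resource group, container name, DNS label, and image are placeholder values:

```shell
# Deploy a single Linux container instance with one CPU core, 1.5 GB of RAM,
# a public IP address, and a DNS label (all names below are placeholders)
az container create \
  --resource-group ContainersRG \
  --name adatumaci \
  --image nginx \
  --os-type Linux \
  --cpu 1 \
  --memory 1.5 \
  --ports 80 \
  --ip-address public \
  --dns-name-label adatumaci
```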

Additional Reading: At the time of authoring this content, Azure Container Instances is in
preview. For more information about its functionality, refer to: “Azure Container Instances
Documentation” at: https://aka.ms/qjr9w8

Demonstration: Installing a Docker host and containers on an Azure VM


In this demonstration, you will learn how to install a Docker host and containers on an Azure VM.

Creating multicontainer applications with Docker Compose


The Docker Compose tool allows you to define
and implement multicontainer applications. To
define an application consisting of multiple
containers, you use a Compose file, which
identifies all the containers, their parameters, and
their interdependencies. To implement the
application based on a Compose file, you run the
docker-compose up command.

Note: When using Docker Compose to develop multicontainer applications, it is common to include Dockerfiles in the development process. Dockerfiles contain definitions of individual images and facilitate their build. Docker Compose files reference Dockerfiles by using the build command. This allows you to control the build and assembly of images via a Compose file, which also defines associations between the resulting containers.

Before you attempt to create multicontainer applications by using Docker Compose, verify its availability
by running the following command from a Windows command prompt or a Linux SSH session:

docker-compose --version

By default, Docker Compose is available on Azure VMs that you deploy from Azure Marketplace Docker
images or by using Docker Machine. If it is not present, you can follow the installation instructions
available on the Installing Compose page on GitHub at https://aka.ms/mjbwks.

Next, you need to create a docker-compose.yml file. The file format follows YAML (a recursive acronym
that stands for YAML Ain’t Markup Language) specifications. YAML is a data serialization language that is
a superset of the JavaScript Object Notation (JSON) file format. A docker-compose.yml file is a text file,
so you can create and modify it by using any text editor, such as Notepad on a Windows Server or vi on
Linux.

For example, the following file defines an application that consists of two containers. The first one hosts a
WordPress instance serving as a front end and the second one hosts a MariaDB SQL database serving as a
back end:

wordpress:
  image: wordpress
  links:
    - db:mysql
  ports:
    - 80:80
db:
  image: mariadb
  environment:
    MYSQL_ROOT_PASSWORD: <root password>

The links entry represents an association between the two containers. The docker-compose.yml file also
includes references to container images and deployment parameters, such as network ports via which the
front end will be available or secrets necessary to protect access to the back-end database.

To start the application in the detached mode, you run the following command:

docker-compose up -d

This will start both containers in the proper sequence. After the WordPress container is running, you will
be able to connect to it via TCP port 80 of the Azure VM, assuming that the port is not blocked by the
operating system firewall or an Azure network security group.
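
Once the application is running, you can inspect and manage it with other Docker Compose commands, for example:

```shell
# List the containers that Compose started for this application
docker-compose ps

# Review the aggregated output of both containers
docker-compose logs

# Stop and remove the application's containers when they are no longer needed
docker-compose down
```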

Additional Reading: For details about the Docker Compose syntax, refer to: “Compose file
version 3 reference” at: https://aka.ms/k44zyt

Additional Reading: For more information about Docker Compose, refer to: “Get started
with Docker and Compose to define and run a multi-container application in Azure” at:
https://aka.ms/dhn0yb

Implementing Azure Container Registry


You can implement your own private registry of container images by using the Azure Container Registry service. This allows you to create and maintain your own collection of Docker container images, while benefiting from the availability, performance, and resiliency of the Azure platform.

You can create a container registry directly from the Azure portal, by using Azure PowerShell or Azure CLI, or via an Azure Resource Manager template–based deployment. You will need to assign a unique name in the azurecr.io Domain Name System (DNS) namespace, and specify an Azure subscription, an Azure region, and either an existing or a new resource group where the registry will reside. You also will have to choose one of three registry stock keeping units (SKUs): Basic, Standard, and Premium. This choice will affect performance and scaling capabilities but not functionality. For example, all three SKUs support Azure Active Directory (Azure AD) authentication and webhook integration, which sends notifications about Docker events to a custom Uniform Resource Identifier (URI).

In addition, you must decide which authentication and authorization model you will use. The most basic
approach involves the use of the Admin user account with two passwords. Having two passwords allows
you to regenerate one of them without affecting authentication attempts with the other. By default, the
account is disabled. You can enable it, which allows you to authenticate from the Docker client by
providing the unique registry name and one of the two passwords. The admin user has full permissions to
the registry. You should limit the use of the Admin user account to single-user scenarios. Otherwise,
multiple users will be using the same set of credentials, which is a problem in terms of auditing.

In multiuser scenarios, you should create one or more service principals in the Azure AD instance
associated with your Azure subscription and then assign them to your registry. At that point, you will be
able to authenticate when accessing the registry by using a service principal name and its password. In
addition, with this approach, you can implement Role-Based Access Control (RBAC) and assign a
predefined or custom role to the service principals that you created.
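
The following Azure CLI 2.0 commands sketch this approach; the registry name, resource group, and role assignment are illustrative, and the role you assign in practice should reflect the permissions each principal requires:

```shell
# Create a container registry (the name must be unique within azurecr.io)
az acr create --name adatumregistry --resource-group ContainersRG --sku Standard

# Create a service principal scoped to the registry with read (pull) access
az ad sp create-for-rbac \
  --scopes $(az acr show --name adatumregistry --query id --output tsv) \
  --role Reader
```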

The following sequence of steps illustrates how to push images to and pull images from a container
registry named adatumregistry by using the Docker client:

• Log in to the registry from your local computer with the Docker client installed by using an Azure AD service principal and its password:

docker login adatumregistry.azurecr.io -u xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx -p Pa55w.rd1234

The value of the -u switch represents the ApplicationID property of the service principal and the value of the -p switch represents the corresponding password.

• Use the docker pull command to download a public image from Docker Hub to the local computer
(image_name represents the name of the image):

docker pull image_name

• Next, use the docker tag command to create an alias of the image that you downloaded in the previous step. The alias contains a fully qualified path to the registry and, optionally, an additional namespace:

docker tag image_name adatumregistry.azurecr.io/lab/image_name

• To upload the newly tagged image to the container registry, run the docker push command:

docker push adatumregistry.azurecr.io/lab/image_name

• To download the newly uploaded image, run:

docker pull adatumregistry.azurecr.io/lab/image_name

• To run a container based on this image and make it accessible via port 8080 on the local computer,
use the docker run command in the following manner:

docker run -it --rm -p 8080:80 adatumregistry.azurecr.io/lab/image_name

• To remove the image from the container registry, run the docker rmi command:

docker rmi adatumregistry.azurecr.io/lab/image_name

Additional Reading: For information regarding managing Azure Container Registry by using Azure PowerShell, refer to: “Quickstart: Create an Azure Container Registry using PowerShell” at: https://aka.ms/H4m4o9

Additional Reading: For information regarding managing Azure Container Registry by using Azure CLI, refer to: “az acr repository” at: https://aka.ms/Xuxqh3

Check Your Knowledge


Question

What is the default operating system image that Docker Machine deploys?

Select the correct answer.

Windows Server 2016

Ubuntu Server

Red Hat Enterprise Linux

SUSE Linux Enterprise Server

CoreOS Linux

Lab A: Implementing containers on Azure VMs


Scenario
Adatum Corporation plans to implement some of its applications as Docker containers on Azure VMs. To
optimize this implementation, you intend to combine multiple containers by using Docker Compose.
Adatum would also like to deploy its own private Docker registry in Azure to store containerized images.
Your task is to test the functionality of tools that facilitate deployment of Docker hosts and Docker
containers. You also need to evaluate Azure Container Registry.

Objectives
After completing this lab, you will be able to:

• Implement Docker hosts on Azure VMs.

• Deploy containers to Azure VMs.

• Deploy multicontainer applications with Docker Compose.

• Implement Azure Container Registry.

Note: The lab steps for this course change frequently due to updates to Microsoft Azure.
Microsoft Learning updates the lab steps frequently, so they are not available in this manual. Your
instructor will provide you with the lab documentation.

Estimated Time: 30 minutes

Virtual machine: 20533E-MIA-CL1


User name: Admin

Password: Pa55w.rd

Exercise 1: Implementing Docker hosts on Azure VMs


Scenario
To test the planned deployment, you must identify the methods that would allow you to deploy Docker
hosts to Azure VMs.

Exercise 2: Deploying containers to Azure VMs


Scenario
After deploying the Docker host VM, you intend to verify that the Docker host is operational. To
accomplish this, you want to run a sample containerized nginx web server, available from Docker Hub.

Exercise 3: Deploying multicontainer applications with Docker Compose to Azure VMs

Scenario
You intend to implement some Adatum applications by using multiple containers. To accomplish this, you
will test the deployment of multicontainer images by using Docker Compose.

Exercise 4: Implementing Azure Container Registry


Scenario
Now that you have successfully implemented a Docker host in an Azure VM and deployed containerized
images from Docker Hub, you want to test the setup and image deployment by using Container Registry.
In your tests, you will use a sample image available from Docker Hub. You will start by creating a
container registry. Next, you will download the sample image to your lab computer and upload it to the
newly created private registry. Finally, you will deploy the image from the private registry to the Docker
host in Azure VM.

Question: Which method would you use when deploying Docker hosts on Azure VMs?

Question: What authentication and authorization method do you intend to use when
implementing Azure Container Registry?

Lesson 2
Implementing Azure Container Service
Implementing individual containers allows you to optimize your existing Azure VM workloads by
minimizing the resources that they require and enhancing their portability. However, to facilitate
scalability and resiliency, you might need to run tens, hundreds, or even thousands of containers across
multiple container hosts. Accomplishing this requires a technology that simplifies the management of
container clusters. Azure Container Service provides this functionality by integrating with open-source
container orchestrators. In this lesson, you will learn about the features and implementation of Azure
Container Service.

Lesson Objectives
After completing this lesson, you will be able to:

• Describe the functionality of Azure Container Service.

• Deploy and manage Azure Container Service (ACS) Docker Swarm clusters.

• Deploy and manage Azure Container Service (ACS) Kubernetes clusters.

• Deploy and manage Azure Container Service (ACS) DC/OS clusters.

• Deploy and manage Azure Container Service (AKS) clusters.

• Implement an AKS cluster.

Overview of container-clustering solutions in Azure


The core component of container-clustering
technologies is an orchestrator. An orchestrator
provides automated provisioning and
infrastructure maintenance capabilities necessary
for cluster operations. These cluster operations
include load balancing and horizontal scaling,
service discovery, self-healing, automated rollouts
and rollbacks, secret and configuration
management, authentication and authorization,
resource allocation, storage orchestration, batch
execution, and workload failover.

In this topic, you will learn about three container orchestrators:

• Docker Swarm

• Kubernetes

• Mesosphere DC/OS–based Marathon



Docker Swarm
Docker is the leader in containerization. It developed a standardized approach to packaging applications
into containers, running these containers on host computers, and providing management capabilities via
its API. However, its offering did not initially include orchestration of these containers across multiple
hosts. That changed with the introduction of Docker Swarm, a separate product that facilitated creation
and administration of clusters of Docker containers. Subsequently, Docker incorporated this functionality
directly into the Docker engine, starting with release 1.12, in the form of Swarm mode.

Note: ACS does not support the integrated Swarm mode but instead relies on legacy
standalone Swarm. In order to implement the Swarm mode cluster in Azure, you need to use the
ACS Engine available from GitHub or a Docker solution from the Azure Marketplace.

Additional Reading: For more information about the ACS Engine project, refer to:
“Azure/acs-engine” at: https://aka.ms/n70ubu

The primary advantage of Swarm mode is its support for the standard Docker API. This ensures a
consistent programming and command-line interface when managing individual containers and their
clusters. Such consistency considerably minimizes the learning curve when transitioning to container
orchestration.

Kubernetes
Google released Kubernetes in February 2015 to implement orchestration of Docker containers. It
included a product-specific programming and management interface but allowed for extensibility
through its modular architecture, enabling integration with third-party and open-source code. In March
2016, Google made Kubernetes open source and handed over its oversight to Cloud Native Computing
Foundation (CNCF), but continues to contribute to its development.

Since its introduction, Kubernetes experienced significant growth in popularity due to a number of
features it supports. It also extends its support to other containerization technologies, such as CoreOS rkt
runtime engine. However, without a managed offering like Azure Container Services, implementing
Kubernetes clusters requires advanced skills, different from those that you would use to manage
individual Docker containers.

Note: Kubernetes is available in a wide range of distributions and deployment options.

Mesosphere DC/OS–based Marathon


Mesosphere DC/OS is the most feature rich of the three container orchestrators presented here.
Companies such as Twitter, Apple, Yelp, Uber, and Netflix have adopted it for its ability to orchestrate tens
of thousands of nodes. However, as its name indicates, Mesosphere DC/OS is a datacenter operating
system, which supports orchestration of not only containers, but other workloads such as microservices,
big data, machine learning, and real-time analytics. One of its primary strengths is the ability to abstract
underlying private or public cloud resources and run different types of workloads on the same
infrastructure. It also allows you to manage each type of workload independently, accounting for their
individual requirements.

These capabilities result from a unique two-tier architecture. Its first tier relies on the Apache Mesos
distributed system kernel to oversee an underlying infrastructure and maintain isolation between different
types of workloads running on that infrastructure. The second tier consists of individual frameworks, each
handling a specific workload type. One of these frameworks is Marathon, which is responsible for
managing Docker containers.

Note: Marathon was one of the first products that provided orchestration for Docker
containers.

The primary strength of Mesosphere DC/OS is its maturity and well-proven ability to run mission-critical
applications on a very large scale.

Despite their differences, the three container orchestrators share a number of common features. In
particular, they all support separation between the management layer and the layer responsible for
hosting application containers. The management layer consists of master nodes, whose names vary
depending on the orchestrator. Containerized applications run on agent nodes, referred to also as
minions or workers. Each orchestrator also supports some form of load balancing, service discovery, which
helps locate containers that need to communicate with one another, and container scheduling. Container
scheduling automatically restarts failed containers and rebalances them when the number of agent nodes
changes. All the orchestrators isolate their master and agent nodes, while facilitating direct
communication between containers running on the same or different hosts. In addition, each orchestrator
implements high availability, although the level of resiliency tends to increase with product maturity. For
example, Mesosphere supports rack awareness, which helps ensure that two instances of the same
containerized application are not running on the same physical hardware. The management interfaces of
each orchestrator product differ, with Docker limited to command-line tools and DC/OS providing a
feature-rich, web-based front end.

Azure Container Service (ACS)


The initial implementation of Azure Container Service (referred to as ACS) provided integration with
Docker Swarm, Kubernetes, and Mesosphere DC/OS–based Marathon. As part of the provisioning of an
ACS cluster, you must choose one of these orchestrators. You will interact with the resulting ACS cluster
by using the management tools and programming interfaces of the selected orchestrator. For example,
once you deploy a Docker Swarm cluster, you will interact with it by using the Docker client. Similarly,
ACS-based implementation of DC/OS supports the DC/OS command-line interface (CLI) and management
of Kubernetes is available via the kubectl command-line utility. ACS integration provides ease of
provisioning and optimized configuration. You can provision an ACS cluster directly from the Azure
portal, by using an Azure Resource Manager template, or via Azure CLI.

Azure Container Service (AKS)


While the original ACS implementation considerably simplified cluster provisioning, it did not provide a
fully managed solution. You were still responsible for many maintenance tasks, including applying
operating system updates and scaling of underlying cluster nodes. To address this limitation and to
prioritize its development efforts, Microsoft decided to shift from its multipronged approach to a focus on
Kubernetes-based integration. This decision reflected the steadily increasing popularity of Kubernetes and
the containerization market trend, with a growing number of managed Kubernetes offerings. To
emphasize the change in strategy, Microsoft branded the new Azure Container Service offering as AKS.

Note: At the time of authoring this content, AKS is in preview. While ACS currently remains
a fully supported Azure service, Microsoft plans to deprecate it once AKS reaches general
availability. At that point, customers will have a 12-month period to migrate their ACS
deployments to AKS.

Note: ACS and AKS simplify provisioning of clusters by leveraging features of the Azure
platform. For example, both automatically implement an Azure load balancer to provide
connectivity to containerized applications.

Note: Docker, Kubernetes, and Mesosphere DC/OS are not the only container orchestration
technologies available in Azure. For example, you can deploy the Deis PaaS-based solution for
running clusters of containerized applications or implement the CoreOS-based rkt container
system.

Creating and managing an ACS Docker Swarm cluster


You can use ACS to implement Docker Swarm by
performing these tasks:

1. Creating a Swarm cluster by using ACS.

2. Connecting to the Swarm cluster.

3. Deploying containers to the Swarm cluster.

Creating a Docker Swarm cluster by using ACS
You can complete this task by using several
methods, including the Azure portal, an Azure
Resource Manager template, Azure CLI 2.0, or ACS
APIs. This topic will describe the first of these methods. Before you start, make sure that you have created
the following:

• An Azure subscription where you intend to deploy the cluster.

• An SSH RSA key pair that you will use to authenticate against ACS cluster nodes.

Additional Reading: For instructions regarding generating SSH RSA keys on a Windows
computer, refer to: “How to Use SSH keys with Windows on Azure” at: https://aka.ms/hhh8pq
For equivalent instructions applicable to Linux and Mac OS X computers, refer to: “How to create
and use an SSH public and private key pair for Linux VMs in Azure” at: https://aka.ms/csgnqn
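If you still need to create the key pair, the following is a minimal sketch that assumes OpenSSH's ssh-keygen tool is available (it ships with Git for Windows, Linux, and macOS); the file path and comment used here are illustrative, not required values:

```shell
# Generate a 2048-bit RSA key pair with no passphrase.
# The path and -C comment below are placeholders.
ssh-keygen -t rsa -b 2048 -f ~/.ssh/acs_rsa -N "" -C "acs-demo"

# The contents of the .pub file are what you paste into the
# SSH public keys field during cluster provisioning.
cat ~/.ssh/acs_rsa.pub
```

Keep the private key file (without the .pub extension) secure; you will reference it later when opening SSH tunnels to the cluster.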

Next, use the following procedure to create an ACS Docker Swarm cluster:

1. In the Azure portal, click Create a resource.

2. On the New blade, in the Search the Marketplace text box, type Azure Container Services.

3. On the Everything blade, click Azure Container Service.

4. On the Azure Container Service blade, click Create.


5. On the Basics blade, in the Name text box, type a unique name of the ACS cluster that you want to
create, select the target Azure subscription, create a new resource group or select an existing one,
and then choose the target Azure region where the cluster will reside. Click OK.
6. On the Master configuration blade, in the Orchestrator drop-down list, select Swarm.

7. In the DNS name prefix text box, provide a unique name that will be part of the fully qualified
domain name (FQDN) of the cluster master. The FQDN will take the form
prefixmgmt.location.cloudapp.azure.com, where location represents the Azure region you chose in
step 5.

8. In the User name text box, type the name of the Administrator account of the ACS cluster nodes that
will host the Docker containers.

9. In the SSH public keys text box, paste the SSH RSA public key that you generated earlier.

10. In the Master count dialog box, type the number of master nodes in the cluster.

11. Select or clear VM diagnostics.

12. Click OK.

13. On the Agent configuration blade, in the Agent count text box, type the number of agent nodes.

14. Click Agent virtual machine size, on the Choose a size blade, click the Azure VM size you want to
use for the agent nodes, click Select, and then click OK.

15. On the Summary blade, click OK to start the deployment.

Additional Reading: For information about creating a Swarm cluster in ACS via Azure CLI
2.0, refer to: “Deploy a Docker container hosting solution using the Azure CLI 2.0” at:
https://aka.ms/ws4qpr
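The FQDN format described in step 7 is simple string composition of the DNS name prefix and the region from step 5; the prefix and location values below are placeholders:

```shell
# Illustrative only: how the master FQDN is composed.
prefix=contoso
location=westus2
echo "${prefix}mgmt.${location}.cloudapp.azure.com"
# → contosomgmt.westus2.cloudapp.azure.com
```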

Connecting to a Swarm cluster


After the deployment completes, you can connect to the load balancer in front of the master node tier by
using its DNS name, in the format prefixmgmt.location.cloudapp.azure.com, where location represents the
Azure region hosting the cluster. To establish a connection, use the following steps:

1. To identify the DNS name, go to the cluster blade in the Azure portal, and then copy the value of the
MasterFQDN entry in the Overview section of the cluster blade.
2. Use the ssh command-line tool to establish an SSH tunnel-based connection to the first master node
by running the following command:

ssh -L 2375:localhost:2375 -p 2200 demouser@<MasterFQDN> -i <privateKeyfile>

<MasterFQDN> is the value that you copied from the Azure portal in step 1, <privateKeyfile> is
the full path to the file containing the private key corresponding to the public key that you provided
during cluster deployment, and demouser is the administrator user name that you specified when
creating the cluster.
3. To eliminate the need to specify the target socket when running Docker client commands, set the
DOCKER_HOST environment variable to point to the tunneled local port by running the
following command:

export DOCKER_HOST=:2375

Additional Reading: The ssh tool is part of Git for Windows, which is available at:
https://aka.ms/u48oog. Alternatively, you can connect to a master node via an SSH tunnel by
using the PuTTY tool. For details about this procedure, refer to: “Make a remote connection to a
Kubernetes, DC/OS, or Docker Swarm cluster” at: https://aka.ms/nzlg31

Deploying containers to a Swarm cluster


Once you establish an SSH tunnel to a Swarm cluster, you can manage it by using the Docker client. For
example, to deploy a new container, you can use the docker run command, as in the following example:

docker run -d -p 80:80 nginx



To deploy multiple containers, you can rerun the same command multiple times. Swarm will automatically
distribute them across the agent nodes. To determine their distribution, you can use the docker ps
command and view the entries in the NAMES column in the resulting output. Deploying multicontainer
applications follows the same Docker Compose–based procedure that you used in the first lesson of
this module.
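The distribution check described above can be sketched offline. The NAMES values below are canned examples standing in for real docker ps output (on a Swarm cluster, the node prefix before the slash identifies the hosting agent); the pipeline tallies containers per node:

```shell
# Canned example: tally containers per agent node from NAMES values.
# The printf stands in for live `docker ps` output on a Swarm cluster.
printf '%s\n' \
  'swarm-agent-A1B2C3D4000000/nginx-1' \
  'swarm-agent-A1B2C3D4000001/nginx-2' \
  'swarm-agent-A1B2C3D4000000/nginx-3' |
  awk -F/ '{count[$1]++} END {for (n in count) print n, count[n]}' | sort
# → swarm-agent-A1B2C3D4000000 2
# → swarm-agent-A1B2C3D4000001 1
```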

Architecture of a Docker Swarm–based ACS cluster


When you provision a Docker Swarm–based ACS cluster, the Azure platform automatically creates several
additional resources. These include a VM scale set containing the agent nodes, an availability set
containing the master Azure VMs, and master and agent load balancers along with their respective
public IP addresses.

Note: All these resources are part of the automatically generated resource group, whose
name starts with the name of the resource group that you specified when creating the Docker
Swarm cluster.

The agent load balancer handles distribution of incoming traffic across agent nodes and containers
running within them. If you intend to make your containerized applications available via ports other than
the ones predefined as part of the load-balancer configuration, you must modify the load-balancing rules.

Additional Reading: For more information about container management with Docker
Swarm, refer to: “Container management with Docker Swarm” at: https://aka.ms/jtkhxc

Creating and managing an ACS Kubernetes cluster


You can use ACS to implement Kubernetes by
performing these tasks:

1. Creating a Kubernetes cluster by using ACS.


2. Connecting to the Kubernetes cluster.

3. Deploying containers to the Kubernetes cluster.

Creating a Kubernetes cluster by using ACS
You can complete this task by using the Azure
portal, an Azure Resource Manager template, or
Azure CLI 2.0. Alternatively, you can use the open-source GitHub project ACS Engine (mentioned earlier in
this lesson) to define the cluster, and then deploy it by using Azure CLI 2.0.
This topic will describe how to use the Azure portal to create a Kubernetes cluster. Before you start, make
sure that you have created the following:

• An Azure subscription where you intend to deploy the cluster.

• An SSH RSA public key that you will use to authenticate against ACS VMs.

• An Azure AD service principal client ID and the corresponding secret. The service principal is
necessary to allow the cluster to dynamically manage Azure resources that are part of cluster
networking infrastructure, including user-defined routes and Azure load balancers. To create the
service principal by using Azure CLI 2.0, use the following steps:

a. Authenticate to your Azure subscription:

az login

b. If there are multiple subscriptions associated with your credentials, select the target subscription:

az account set --subscription <subscription ID>

<subscription ID> is the ID of the target subscription.

c. Create a resource group that will contain cluster networking infrastructure resources:

az group create -n <resource group name> -l <Azure region>

<resource group name> is the name of the resource group and <Azure region> is the Azure
region where your cluster will reside.

d. Create a service principal in the Azure AD tenant associated with your Azure subscription and
assign the Contributor role to it, with the scope set to the newly created resource group:

az ad sp create-for-rbac --role="Contributor" \
--scopes="/subscriptions/<subscription ID>/resourceGroups/<resource group name>"

This will return several attributes of the service principal, including appId and password. You will
use their values when creating the cluster.
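Because az ad sp create-for-rbac returns its attributes as JSON, you can capture appId and password for later use. The following is a hypothetical sketch shown against canned output, since a real run requires an authenticated Azure CLI session; all values below are placeholders:

```shell
# Canned stand-in for the JSON that `az ad sp create-for-rbac` prints.
output='{"appId": "11111111-2222-3333-4444-555555555555", "password": "s3cret", "tenant": "00000000-0000-0000-0000-000000000000"}'

# Extract the appId field with Python's standard json module.
appId=$(printf '%s' "$output" | python3 -c 'import json,sys; print(json.load(sys.stdin)["appId"])')
echo "$appId"
# → 11111111-2222-3333-4444-555555555555
```

In a real session you would capture the command's output directly, for example `output=$(az ad sp create-for-rbac ...)`.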

Additional Reading: For instructions about generating SSH RSA keys on Windows and
Linux computers, refer to the information provided in the second topic of this lesson. For more
information about setting up an Azure AD service principal for a Kubernetes cluster when using
ACS, refer to: “Set up an Azure AD service principal for a Kubernetes cluster in Container Service”
at: https://aka.ms/yi5qri

Next, use the following procedure to create an ACS Kubernetes cluster:

1. In the Azure portal, click Create a resource.

2. On the New blade, in the Search the Marketplace text box, type Azure Container Services.

3. On the Everything blade, click Azure Container Service.

4. On the Azure Container Service blade, click Create.

5. On the Basics blade, in the Name text box, type a unique name of the ACS cluster that you want to
create, select the target Azure subscription, select the resource group that you created earlier, and
then choose the target Azure region where the cluster will reside. Click OK.

6. On the Master configuration blade, in the Orchestrator drop-down list, select Kubernetes.
7. In the DNS name prefix text box, provide a unique name that will be part of the cluster master’s
FQDN. The FQDN will take the form prefixmgmt.location.cloudapp.azure.com, where location
represents the Azure region that you chose in step 5.

8. In the User name text box, type the name of the Administrator account of the ACS VMs that will host
Docker containers.

9. In the SSH public keys text box, paste the SSH RSA public key that you generated earlier.

10. In the Service principal client ID text box, type the value of the appID attribute that displayed in the
output of the az ad sp create-for-rbac command that you ran earlier.

11. In the Service principal client secret text box, type the value of the password attribute that
displayed in the output of the az ad sp create-for-rbac command that you ran earlier.

12. In the Master count dialog box, type the number of master nodes in the cluster.

13. Click OK.

14. On the Agent configuration blade, in the Agent count text box, type the number of agent nodes.

15. Click Agent virtual machine size, on the Choose a size blade, click the Azure VM size that you want
to use for the agent nodes, and then click Select.

16. In the Operating system drop-down list, select either the Linux or Windows operating system.

Note: At the time of authoring this content, Windows-based deployment is in preview.

17. Click OK.

18. On the Summary blade, click OK to start the deployment.

Additional Reading: For information about creating a Kubernetes cluster in ACS by using
Azure CLI 2.0, refer to: “Deploy Kubernetes cluster for Linux containers” at: https://aka.ms/toica5

Connecting to a Kubernetes cluster in ACS


Once the deployment completes, connect to the cluster by using the Kubernetes command-line client
kubectl, following these steps:

1. If necessary, start by installing Azure CLI 2.0. Follow with the installation of kubectl by running the
following command at a command prompt:

az acs kubernetes install-cli

Alternatively, you can use Azure Cloud Shell, which has both Azure CLI 2.0 and kubectl preinstalled.

2. Next, retrieve the credentials necessary to authenticate successfully to the target cluster:

az acs kubernetes get-credentials --resource-group=myResourceGroup \
--name=myK8sCluster --ssh-key-file <privateKeyfile>

<privateKeyfile> is the full path to the file containing the private key corresponding to the public key
you provided during cluster deployment.

3. To verify that the connection was successful, you can list the cluster nodes by running the following
command:

kubectl get nodes

You might need to reference kubectl.exe by its full path if its current location is not referenced in the
PATH system environment variable.

Deploying applications to a Kubernetes cluster


Deploying containerized applications to a Kubernetes cluster requires the usage of YAML-formatted
manifest files. A manifest file describes a desired cluster state, including container images that should be
running on its agent nodes. The following illustrates a sample manifest file:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: azure-vote-back
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: azure-vote-back
    spec:
      containers:
      - name: azure-vote-back
        image: redis
        ports:
        - containerPort: 6379
          name: redis
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-back
spec:
  ports:
  - port: 6379
  selector:
    app: azure-vote-back
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: azure-vote-front
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: azure-vote-front
    spec:
      containers:
      - name: azure-vote-front
        image: microsoft/azure-vote-front:redis-v1
        ports:
        - containerPort: 80
        env:
        - name: REDIS
          value: "azure-vote-back"
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-front
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: azure-vote-front

To apply the manifest file to the cluster, save it to a text file, and then run the kubectl create command
with the -f parameter followed by the file name. To monitor the progress of a deployment, you can use
the kubectl get service command referencing the name of the service, followed by the --watch
parameter. For example, with the sample YAML file listed above, you would run:

kubectl get service azure-vote-front --watch

This command would periodically display the status of the containers, including their external IP
addresses. Once an IP address becomes available, you will be able to connect to it from the internet.
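Extracting that external IP can be sketched offline. The sample output below is canned, standing in for a live kubectl get service run (the column values are placeholders); the awk filter is what the example demonstrates:

```shell
# Canned stand-in for `kubectl get service azure-vote-front` output.
sample='NAME               TYPE           CLUSTER-IP   EXTERNAL-IP     PORT(S)        AGE
azure-vote-front   LoadBalancer   10.0.37.27   52.179.23.131   80:30572/TCP   2m'

# Print the EXTERNAL-IP column for the azure-vote-front service.
printf '%s\n' "$sample" | awk '$1 == "azure-vote-front" { print $4 }'
# → 52.179.23.131
```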

Architecture of a Kubernetes-based ACS cluster


Just as with Docker Swarm, when you provision a Kubernetes-based ACS cluster, the Azure platform
automatically creates a number of additional resources. The primary difference in this case is that
Kubernetes does not use Azure VM scale sets for agent nodes, which affects its autoscaling capabilities. Its
network configuration is more complex, relying on user-defined routes to facilitate resilient
communication between master and agent nodes. However, ACS and Kubernetes automatically handle
details of this configuration, so this extra complexity does not increase management overhead.

Note: All these resources are part of the same resource group, to which you deployed the
Kubernetes-based ACS cluster.

Additional Reading: For more information about container management with Kubernetes,
refer to: “Deploy Kubernetes cluster for Linux containers” at: https://aka.ms/toica5

Creating and managing an ACS DC/OS cluster


You can use ACS to implement Marathon on
Mesosphere DC/OS by performing these tasks:

1. Creating a DC/OS cluster by using ACS.

2. Connecting to the DC/OS cluster.

3. Deploying containers to the DC/OS cluster.

Creating a DC/OS cluster by using ACS


You can complete this task by using several
methods, including the Azure portal, an Azure
Resource Manager template, Azure CLI 2.0, or ACS
APIs. This topic will describe the first of these
methods. Before you start, make sure that you have created the following:
• An Azure subscription where you intend to deploy the cluster.

• An SSH RSA public key that you will use to authenticate against ACS VMs.

Note: For instructions about generating SSH RSA keys on Windows and Linux computers,
refer to the information provided in the second topic of this lesson.

Next, use the following procedure to create a DC/OS cluster:

1. In the Azure portal, click Create a resource.

2. On the New blade, in the Search the Marketplace text box, type Azure Container Services.

3. On the Everything blade, click Azure Container Service.

4. On the Azure Container Service blade, click Create.

5. On the Basics blade, in the Name text box, type a unique name of the ACS cluster that you want to
create, select the target Azure subscription, create a new resource group or select an existing one,
and then choose the target Azure region where the cluster will reside. Click OK.

6. On the Master configuration blade, in the Orchestrator drop-down list, select DC/OS.

7. In the DNS name prefix text box, provide a unique name that will be part of the cluster master’s
FQDN. The FQDN will take the form prefixmgmt.location.cloudapp.azure.com, where location
represents the Azure region that you chose in step 5.

8. In the User name text box, type the name of the Administrator account of the ACS VMs that will host
Docker containers.
9. In the SSH public keys text box, paste the SSH RSA public key that you generated earlier.

10. In the Master count dialog box, type the number of master nodes in the cluster.

11. Select or clear VM diagnostics.

12. Click OK.

13. On the Agent configuration blade, in the Agent count text box, type the number of agent nodes.

14. Click Agent virtual machine size, on the Choose a size blade, click the Azure VM size you want to
use for the agent nodes, click Select, and then click OK.

15. On the Summary blade, click OK to start the deployment.

Additional Reading: For information about creating a DC/OS cluster in ACS by using
Azure CLI 2.0, refer to: “Deploy a DC/OS cluster” at: https://aka.ms/wyod2m

Connecting to a DC/OS cluster


After the deployment completes, you can connect to the load balancer in front of the master node tier by
using its DNS name, in the format prefixmgmt.location.cloudapp.azure.com, where location represents the
Azure region hosting the cluster. To establish a connection, use the following steps:

1. To identify the DNS name, go to the cluster blade in the Azure portal, and then copy the value of the
MasterFQDN entry in the Overview section.
2. Use the ssh command-line tool to establish an SSH tunnel-based connection to the first master node
by running the following command:

ssh -L 80:localhost:80 -p 2200 demouser@<MasterFQDN> -i <privateKeyfile>

<MasterFQDN> is the value that you copied from the Azure portal in step 1, <privateKeyfile> is
the full path to the file containing the private key corresponding to the public key that you provided
during cluster deployment, and demouser is the administrator user name that you specified when
creating the cluster.

Note: For instructions about using SSH on Windows, refer to the earlier topics of this lesson.

3. After you are connected, you can use a web browser to go to http://localhost, which will display the
DC/OS portal. This allows you to view and manage cluster configuration and resources.

4. To manage the cluster via command line, install DC/OS CLI. If necessary, install Azure CLI 2.0, and
then run the following command at the command prompt:

az acs dcos install-cli

5. Next, configure the dcos tool to use the existing SSH tunnel by running:

dcos config set core.dcos_url http://localhost

Deploying containers to a DC/OS cluster


Deploying containerized applications to a DC/OS cluster requires configuring Marathon, which serves as
the container orchestrator. You control this configuration by using JSON-formatted files. The following
listing illustrates a sample configuration file that deploys a single instance of a Docker container based on
the nginx image, makes it available from the internet on port 80, and then allocates CPU, memory, and
disk resources to it:

{
  "id": "nginx-demo",
  "cmd": null,
  "cpus": 1,
  "mem": 32,
  "disk": 0,
  "instances": 1,
  "container": {
    "docker": {
      "image": "nginx",
      "network": "BRIDGE",
      "portMappings": [
        {
          "containerPort": 80,
          "hostPort": 80,
          "protocol": "tcp",
          "name": "80",
          "labels": null
        }
      ]
    },
    "type": "DOCKER"
  },
  "acceptedResourceRoles": [
    "slave_public"
  ]
}

To apply the app definition to the cluster, save it to a text file, and then run the dcos marathon app add
command followed by the file name. To monitor the progress of a deployment, you can use the dcos
marathon app list command, which displays the status of the containerized applications. After the value
of the WAITING column for the application changes to False, you will be able to connect to it from the
internet. To identify the external IP address, switch to the cluster blade in
the Azure portal, and then copy the value of the entry in the FQDN column in the row displaying the
agentpool configuration.

You can also view the status of the application in the DC/OS portal by navigating to the Services node.
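Before submitting an app definition, it can be worth confirming that the file is valid JSON. The sketch below uses a trimmed version of the sample definition above; the dcos command is commented out because it assumes the SSH tunnel and DC/OS CLI configuration described earlier:

```shell
# Write a trimmed Marathon app definition to disk.
cat > nginx-demo.json <<'EOF'
{
  "id": "nginx-demo",
  "instances": 1,
  "container": {
    "docker": { "image": "nginx", "network": "BRIDGE" },
    "type": "DOCKER"
  }
}
EOF

# Validate the file as JSON; json.tool exits nonzero on a parse error.
python3 -m json.tool nginx-demo.json > /dev/null && echo "valid JSON"

# Then submit it (requires the tunnel and DC/OS CLI set up earlier):
# dcos marathon app add nginx-demo.json
```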

Architecture of a DC/OS-based ACS cluster


When you provision a DC/OS-based ACS cluster, the Azure platform automatically creates several
additional resources, including VM scale sets containing the private and public agents, an availability
set containing the master Azure VMs, and master and agent load balancers along with their
respective public IP addresses.

Note: All these resources are part of an automatically generated resource group, whose
name starts with the name of the resource group that you specified when creating the DC/OS
cluster.

The public agent load balancer handles distribution of incoming traffic across public agent nodes and
containers running within them. If you intend to make your containerized applications available via ports
other than the ones predefined as part of the load balancer configuration, you must modify the load-
balancing rules.

Additional Reading: For more information about container management with DC/OS,
refer to: “Deploy a DC/OS cluster” at: https://aka.ms/wyod2m

Creating and managing an AKS cluster


You can implement a multicontainer AKS-based
deployment by performing these tasks:

1. Creating an AKS cluster.

2. Connecting to the AKS cluster.


3. Deploying containers to the AKS cluster.

Creating an AKS cluster


You can complete this task by using the Azure
portal, an Azure Resource Manager template, or
Azure CLI 2.0. Alternatively, you can use the open-source GitHub project named acs-engine to
define the cluster, and then deploy it by using Azure CLI 2.0.

Additional Reading: To find out more information about the acs-engine project, refer to:
“Azure/acs-engine” at: https://aka.ms/n70ubu

This topic will describe how to use the Azure portal to create an AKS cluster. Before you start, make sure
that you have created the following:

• An Azure subscription where you intend to deploy the cluster.

• An SSH RSA public key that you will use to authenticate against AKS cluster nodes.

Additional Reading: For instructions about generating SSH RSA keys on Windows and
Linux computers, refer to the information provided in the second topic of this lesson.

• An Azure AD service principal client ID and the corresponding secret. The service principal is
necessary to allow the cluster to dynamically manage Azure resources in the cluster-networking
infrastructure, including user-defined routes and Azure load balancers. To create the service principal
by using the Azure portal, follow these steps:
a. Sign in to Azure as the Service Administrator of the Azure subscription where you intend to
deploy the AKS cluster.

b. In the Azure portal, click Azure Active Directory.

c. On the Azure Active Directory blade, click App registrations, and then click New application
registration.

d. On the Create blade, specify the following, and then click Create:

o Name: any string of characters that will represent the service principal name

o Application type: Web app / API

o Sign-on URL: any valid URL
e. Once the application registration completes, on the registered app blade, note the value of the
Application ID. You will need to provide it when creating the AKS cluster.

f. On the registered app blade, click Settings, and then click Keys.

g. In the Password section, create a new entry by typing a descriptive name in the DESCRIPTION
column, selecting the password validity period in the EXPIRES column, and then clicking Save.

h. Copy the string that appears in the VALUE column. You will need to provide it when creating the
AKS cluster.

Next, use the following procedure to create an AKS Kubernetes cluster:

1. In the Azure portal, click Create a resource.

2. On the New blade, click Containers and then click Azure Container Service – AKS (preview).

3. On the Azure Container Service blade, click Create.

4. On the Basics blade, specify the following settings, and then click OK:

o Cluster name: a unique name of the AKS cluster that you want to create

o DNS prefix (optional): a DNS prefix that you want to include in the cluster name when
referencing Kubernetes API

o Kubernetes version: the version of Kubernetes that you want to implement in the cluster

o Subscription: the name of your Azure subscription where you provisioned the service principal in
the previous procedure

o Resource group: the name of an existing or new resource group that will host the master nodes
of the cluster

5. On the Configuration blade, specify the following settings, and then click OK:

o User name: the name of the administrator account of cluster nodes

o SSH public key: the public key of the SSH RSA key pair (you must also have the corresponding
private key to authenticate successfully)

o Service principal client ID: the Application ID of the service principal that you created earlier
o Service principal client secret: the password of the service principal

o Node count: the number of nodes that you want to provision in the cluster

o Node virtual machine size: the size of the virtual machines hosting cluster nodes

o OS disk size (optional): the size of the operating system disk of the virtual machines hosting
cluster nodes

6. On the Summary blade, once the validation completes successfully, click OK to start the deployment.
The deployment will create two resource groups. The first, which you specified on the Basics blade, will
host the managed master nodes that constitute the control plane. All remaining resources will reside in a
separate, autocreated resource group.

Note: When provisioning an AKS cluster, you do not specify the number of master nodes.
The Azure platform automatically adjusts the number of master nodes according to their
utilization levels.

Additional Reading: For information about creating an AKS cluster by using Azure CLI 2.0,
refer to: “Quickstart: Deploy an Azure Container Service (AKS) cluster” at: https://aka.ms/Hf8j85

Architecture of an AKS cluster


When you provision an AKS cluster, besides the resource group containing the managed container
service, the Azure platform automatically creates several additional resources. These include the agent
nodes in a separate resource group. The managed container service contains only fully managed master
nodes, to which you do not have direct access. The master nodes handle most cluster management tasks,
such as maintaining consistent configuration across all cluster nodes, health monitoring and self-healing,
service discovery, load balancing, and storage orchestration. The separate resource group contains virtual
machines hosting agent nodes, an availability set to which all the virtual machines belong, and their
managed disks. It also contains all networking components, including a virtual network, a route table, and
a network security group. The route table facilitates communication between master and agent nodes.

Connecting to a Kubernetes cluster in AKS


Once the deployment completes, connect to the cluster by using the Kubernetes command-line client
kubectl, following these steps:

1. If necessary, start by installing Azure CLI 2.0. Follow with the installation of kubectl by running the
following command at a command prompt:

az aks install-cli

Alternatively, you can use Azure Cloud Shell, which has both Azure CLI 2.0 and kubectl preinstalled.

2. Next, retrieve the credentials necessary to authenticate successfully to the target cluster:

az aks get-credentials --resource-group=<resource-group-name> --name=<cluster-name>

where <resource-group-name> designates the name of the resource group hosting the master nodes
of the cluster and <cluster-name> designates the name of the cluster that you provisioned.

3. To verify that the connection was successful, you can list the cluster nodes by running the following
command:

kubectl get nodes

You might need to reference kubectl.exe by its full path if the PATH system environment variable
does not include its file system location.

Deploying applications to a Kubernetes cluster


Deploying containerized applications to a Kubernetes cluster requires the usage of YAML-formatted
manifest files. For a sample YAML file, refer to the third topic of this lesson.

To apply the manifest file to the cluster, save it to a text file, and then run the kubectl create command with the -f parameter followed by the file name. To monitor the progress of a deployment, you can use the kubectl get service command, referencing the name of the service, followed by the --watch parameter. For example, with the sample YAML file referenced above, you would run:

kubectl get service azure-vote-front --watch

This command would periodically display the status of the containers, including their external IP
addresses. Once an IP address becomes available, you will be able to connect to it from the internet.
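The kubectl create step itself could look like the following, assuming the manifest was saved as azure-vote.yaml (a hypothetical file name used here for illustration):

# Apply the saved manifest to the cluster
kubectl create -f azure-vote.yaml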

Additional Reading: For more information about container management with Kubernetes,
refer to: “Deploy Kubernetes cluster for Linux containers” at: https://aka.ms/toica5

Demonstration: Creating an AKS cluster


In this demonstration, you will see how to implement an AKS cluster.

Check Your Knowledge


Question

What are the primary characteristics of Docker Swarm–based ACS deployments?

Select the correct answer.

Support for Docker APIs

YAML-based container deployments

Cluster management via a web-based interface

Cluster management via a command-line interface

Requirement to create an Azure AD service principal



Lab B: Implementing Azure Container Service (AKS)


Scenario
Adatum is considering implementing containers on a larger scale by leveraging the capabilities that AKS
offers. You want to test load balancing and scaling of a sample containerized application.

Objectives
After completing this lab, you will be able to:

• Create an AKS cluster.

• Manage the AKS cluster.

Note: The lab steps for this course change frequently due to updates to Microsoft Azure. Microsoft Learning updates them on an ongoing basis, so they are not available in this manual. Your instructor will provide you with the lab documentation.

Estimated Time: 30 minutes


Virtual machine: 20533E-MIA-CL1

User name: Admin

Password: Pa55w.rd

Exercise 1: Creating an AKS cluster


Scenario
You must start by identifying the prerequisites for deploying an AKS cluster. You want to use Azure CLI for
cluster provisioning.

Exercise 2: Managing an AKS cluster


Scenario
With the new AKS cluster running, you must connect to it, deploy a sample containerized application in it,
and validate its availability and resiliency by testing clustering features such as scaling and load balancing.
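As a preview of the kinds of commands involved (all names below are placeholders; the authoritative steps are in the lab documentation provided by your instructor), scaling can target either the application pods or the cluster nodes:

# Scale the application to three replicas (deployment name is hypothetical)
kubectl scale --replicas=3 deployment/azure-vote-front
# Scale the cluster itself to two agent nodes
az aks scale --resource-group MyAKSGroup --name MyAKSCluster --node-count 2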

Question: What deployment methodology would you choose when deploying AKS clusters?

Question: What are the primary advantages of using AKS for deploying container clusters?

Module Review and Takeaways


Review Question

Question: Which container orchestration approach would you implement in your environment?

Module 8
Planning and implementing backup and disaster recovery
Contents:
Module Overview 8-1
Lesson 1: Planning for and implementing Azure Backup 8-3

Lesson 2: Overview of Azure Site Recovery 8-11

Lesson 3: Planning for Site Recovery 8-20


Lesson 4: Implementing Site Recovery with Azure as the disaster recovery site 8-29

Lab: Implementing Azure Backup and Azure Site Recovery 8-37

Module Review and Takeaways 8-38

Module Overview
Maintaining business continuity is one of the primary challenges of any organization that depends on
computing resources for its operations. Developing a business continuity plan involves identifying the steps that are necessary to recover from a disaster that significantly affects the availability of these resources. When identifying these steps, there are two main factors to consider:

• Recovery Time Objective (RTO), which represents the acceptable amount of time it takes to restore
the original functionality of a production system

• Recovery Point Objective (RPO), which represents the acceptable amount of data loss following the
restore of a production system

The desired values of RTO and RPO differ, depending on factors such as the type and size of a business. Regardless of these differences, however, the two most common means of addressing business continuity needs involve implementing a comprehensive backup and disaster recovery strategy. Microsoft Azure offers dedicated services that not only considerably simplify both of these tasks but also minimize their cost.

For example, a typical on-premises backup strategy involves the use of tapes, which require additional
infrastructure and off-site long-term storage. The traditional approach to implementing a disaster
recovery solution relies on an alternative physical location hosting standby computing resources. These
resources have to be continuously available in case the production site experiences an extensive outage.
This not only tends to be expensive but also results in increased management overhead. Azure Backup
and Azure Site Recovery help to address these challenges in an efficient and cost-effective manner by
minimizing the costs associated with long term storage, provisioning a disaster recovery site, and
automating the process of maintaining it.
In this module, you will find out about the different types of scenarios that Azure Backup and Azure Site
Recovery support. You will become familiar with the process of configuring backup in on-premises and
cloud environments. You will also learn about planning Azure Site Recovery deployments and step
through their implementations.

Objectives
After completing this module, you will be able to:

• Protect on-premises systems and Azure VMs by using Azure Backup.

• Describe Azure Site Recovery capabilities.

• Identify the factors that you must consider when planning for Site Recovery.
• Explain the high-level steps that are necessary to implement Site Recovery.

Lesson 1
Planning for and implementing Azure Backup
Azure offers several different options that you can use to take advantage of its services for backup of on-premises and cloud-based systems. Some Azure backup options integrate seamlessly with existing
Microsoft backup products, including built-in Windows Backup software and Microsoft System Center
2016 Data Protection Manager (DPM). Other options such as Azure VM-level backup or Microsoft Azure
Backup Server can enhance or even replace existing backup solutions. This lesson details characteristics
and functionality of various Azure backup options.

Lesson Objectives
After completing this lesson, you will be able to:

• Describe the available Azure Backup options.

• Explain how to perform file, folder, and system state backups with the Azure Recovery Services Agent.
• Explain how to protect Azure VMs by using Azure VM extensions.

• Describe how to integrate Azure Backup with System Center 2016 Data Protection Manager and
Azure Backup Server.
• Implement and use Azure VM backup.

Demonstration: Preparing the lab environment


Perform the tasks in this demonstration to prepare the lab environment. The environment will be
configured while you progress through this module, learning about the Azure services that you will use in
the lab.

Important: The scripts used in this course might delete objects that you have in your
subscriptions. Therefore, you should complete this course by using a new Azure subscription. You
should also use a new Microsoft account that is not associated with any other Azure subscription.
This will eliminate the possibility of any potential confusion when running setup scripts.

This course relies on custom Azure PowerShell modules including Add-20533EEnvironment to prepare
the lab environment for labs and Remove-20533EEnvironment to perform clean-up tasks at the end of
the module.

Overview of Azure Backup


The Azure Backup service uses Azure resources
for short-term and long-term storage to
minimize or even eliminate the need for
maintaining physical backup media such as
tapes, hard drives, and DVDs. Since its
introduction, the service has evolved from its
original form, which relied exclusively on a
backup agent that was downloadable on the
Azure portal, into a much more diverse offering.
The Azure Backup service includes:

• Windows Server and client (64-bit) file, folder, and system state backups with the Azure Recovery Services Agent, and the Online Backup integration module for Windows Server Essentials.

• Long-term storage for backups with Data Protection Manager and Recovery Services Agent.
• Long-term storage for backups with Microsoft Azure Backup Server and Recovery Services Agent.

• Windows-based and Linux-based Azure VM-level backups with the Azure VM extensions
(VMSnapshot and VMSnapshotLinux, respectively).

Recovery Services vault


Regardless of the backup functionality that you intend to implement, to use Azure Backup to protect your
data, you must create a Recovery Services vault in Azure. A vault is the virtual destination of your backups; it also contains configuration information about the systems that Azure Backup protects. To protect a
system, you must register it with a vault. The vault should reside in an Azure region that is close to the
physical location of the data, and in the case of Azure Infrastructure as a Service (IaaS) virtual machines, in
the same region.

Two resiliency options are available when creating an Azure Recovery Services vault: locally redundant and
geo-redundant. The first option leverages locally redundant Azure Storage, consisting of three synchronously replicated copies of backed-up content in the same Azure region. The second option
leverages geo-redundant Azure Storage, including three additional copies in the paired Azure region,
providing an additional level of protection.

Note: You should set this option as soon as you create the vault, since you will not be able to change it once you register the first of your systems with the vault.
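Assuming you manage the vault with Azure CLI 2.0 (the vault and resource group names below are placeholders), you could set the redundancy before registering any systems as follows:

# Set storage redundancy on a vault that has no registered systems yet
az backup vault backup-properties set --name MyVault --resource-group MyResourceGroup --backup-storage-redundancy GeoRedundant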

An Azure subscription can host up to 25 vaults. Each vault can protect up to 50 computers that run the
Azure Recovery Services Agent or the Online Backup integration module. Alternatively, if you back up
Azure IaaS virtual machines by relying on the Azure IaaS VM Backup extension, the vault can protect up to
200 computers.

Note that there is no limit on the amount of data in the vault for each protected computer. There also is
no limit on the maximum retention time of backed up content. However, there is a restriction on the size
of each data source: about 54,000 gigabytes (GB) for Windows 8, Windows Server 2012, and newer
operating systems. The maximum scheduled backup frequency depends on the backup approach, with up
to three backups per day with Windows Server and Client Recovery Services Agent, up to two backups
with Data Protection Manager or the Microsoft Azure Backup Server, and a single backup when using VM
extension–based setup.

All backups are encrypted at the source with a passphrase that the customer chooses and maintains.
Azure Recovery Services Agent–based backups are also automatically compressed. Compression does not
apply to Azure VM extension–based backups. There are no additional charges for the traffic generated
during backup into Azure (ingress) and during restore out of Azure (egress).

Azure Backup offers several optional features that provide additional data protection, including:

• Retention of backups for 14 days following their deletion.

• A custom PIN, which is required to modify an existing passphrase or to stop protection and delete backup data.

• Administrative email alerts triggered by such events as disabling or deleting backups.

These features are automatically enabled for all newly created vaults.

Note: Azure Backup relies on the same agent as Azure Site Recovery, which later topics in
this module will discuss. This is the reason for the references to the Azure Recovery Services
Agent in this lesson. Both Azure Backup and Azure Site Recovery also store data from systems
they protect by using an Azure Recovery Services vault. A single vault can simultaneously serve as
the repository for Azure Backup and Azure Site Recovery.

File, folder, and system state backups with the Recovery Services Agent
Azure Backup’s most basic functionality allows
you to protect folders and files on 64-bit
Windows Server and client operating systems,
both on-premises and in Azure. This functionality
relies on the Azure Recovery Services Agent,
which is available for download on the Azure
Recovery Services vault interface in the Azure
portal. You must install the agent on every
system that you want to protect, and you must
register it with the target vault.
To set up Recovery Services Agent–based protection for an on-premises Windows computer from the Azure portal, perform the following steps:

1. Create a Recovery Services vault.

2. Configure the Backup Infrastructure storage replication type by choosing either the Locally-redundant option or the Geo-redundant option on the Backup Configuration blade.

3. Specify Backup Goal settings, including the:

o Location of the workload: On-premises

o Workload type: Files and folders or System state

4. Download the vault credentials from the Prepare infrastructure blade of the Azure Recovery
Services vault. The Recovery Services Agent uses vault credentials to register with the vault during the
installation process.
5. Download the Recovery Services Agent from the Prepare infrastructure blade. Choose the
appropriate option for the system that you want to protect. In this case, you need to select the
Download Agent for Windows Server or Windows Client option.

6. Install the Recovery Services Agent and register it with the vault. When registering with the vault, you
specify a custom passphrase for encrypting backups.

7. Use the Azure Backup console to configure and schedule backups. After installing the agent, the new
console, whose interface closely matches the native Windows backup console, becomes available. This
allows you to select files and folders to back up and to schedule a backup directly to the Azure
Recovery Services vault. You can also use Azure PowerShell to configure and initiate backup
operations. After you schedule a backup, you also have the option to run an on-demand backup.

Note: If the computer that you want to protect contains a large amount of data and the bandwidth of your internet connection to Azure is limited, consider using the Azure Import/Export service to perform the initial backup. In this approach, you copy the data that you want to back up to a physical disk, encrypt it, and then ship the disk to the Azure datacenter where the vault is located. Azure then restores the content directly to the vault, which allows you to perform an incremental rather than a full backup following the registration.
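As an example of the PowerShell option mentioned in step 7, an on-demand backup can be triggered with the Recovery Services Agent's own cmdlets. This is a sketch that assumes a backup policy is already in place on the protected computer:

# Run the existing backup policy immediately (Recovery Services Agent cmdlets)
Import-Module MSOnlineBackup
Get-OBPolicy | Start-OBBackup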

Additional Reading: For more information, refer to: “Back up a Windows Server or client
to Azure using the Resource Manager deployment model” at: http://aka.ms/Aabdfe

Azure VM-level backup by using Azure VM extensions


If the systems that you want to protect are
running the Windows or Linux operating systems
on Azure VMs, you can perform a VM-level
backup. This process uses the Azure VMSnapshot
(on Windows) or Azure VMSnapshotLinux (on
Linux) extension. A VM-level backup offers
application consistency for Windows virtual
machines. It also offers a higher limit for the
number of protected systems per vault, which is
200 Azure VMs instead of 50 protected systems
with the Recovery Services Agent. On the other
hand, the backup frequency in this case is limited
to once per day.

The restore process available from the Azure portal creates a new virtual machine. As a result, restoring
individual files or folders requires mounting a volume containing the backup within the operating system
of the same or different Azure VM. When you restore an entire Azure VM, the restore does not include
such VM-level settings as network configuration, which means that you must recreate them after the
restore. You can automate this task by using Azure PowerShell to perform a restore. This also allows you
to restore individual disks. You should use scripting when recovering Azure VMs that host Active Directory
Domain Services (AD DS) domain controllers or that have complicated network configuration. Such
configurations might include load balancing, multiple reserved IP addresses, or multiple network adapters.
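As a sketch of such a restore script, using the AzureRM PowerShell cmdlets current at the time of writing (the vault, VM, and storage account names below are placeholders), restoring the disks of a protected Azure VM to a storage account could look like this:

# Select the vault and identify the protected VM
$vault = Get-AzureRmRecoveryServicesVault -Name "MyVault"
Set-AzureRmRecoveryServicesVaultContext -Vault $vault
$container = Get-AzureRmRecoveryServicesBackupContainer -ContainerType AzureVM -FriendlyName "MyVM"
$item = Get-AzureRmRecoveryServicesBackupItem -Container $container -WorkloadType AzureVM
# Pick the most recent recovery point and restore the disks to a storage account
$rp = Get-AzureRmRecoveryServicesBackupRecoveryPoint -Item $item | Select-Object -First 1
Restore-AzureRmRecoveryServicesBackupItem -RecoveryPoint $rp -StorageAccountName "mystorageaccount" -StorageAccountResourceGroupName "MyResourceGroup"

The restored disks can then be attached to a new or existing VM, where you recreate the network configuration as described above.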

Additional Reading: For details regarding the procedure describing restore of individual
folders and files when using Azure VM-level backup, refer to: “Recover files from Azure virtual
machine backup” at: https://aka.ms/Aq89z2

To set up an Azure IaaS VM-level backup with the Azure portal, follow these steps:

1. If you do not already have an available Recovery Services vault, create a new one. Note that the vault
must reside in the same Azure region as the Azure VMs.

2. Specify the vault’s storage replication type.

3. Specify Backup goal settings, including the:


o Location of the workload: Azure

o Workload type: Virtual machine

4. Choose the backup policy. The policy determines backup frequency and retention range. The default,
predefined policy triggers the backup daily at 3:00 PM and has the 30-day retention period. You can
create a custom policy to modify these values, by scheduling backup to take place on specific days
and setting the retention period on a daily, weekly, monthly, and yearly basis.
5. Specify the virtual machines to back up. The Azure portal will automatically detect the Azure VMs that satisfy Azure VM–level backup requirements. When you click Items to backup on the Backup blade, the Azure portal will display these virtual machines on the Select virtual machines blade. This will automatically deploy the Azure VM backup extension to the virtual machines that you select and register them with the vault.
6. At this point, you can identify the Azure VMs that are backed up to the vault by viewing the content
of the Backup Items blade.
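The same configuration can also be scripted with Azure CLI 2.0. As a hedged sketch (all names below are placeholders, and the vault is assumed to already exist in the VM's region):

# Enable daily backup of an existing Azure VM by using the default policy
az backup protection enable-for-vm --resource-group MyResourceGroup --vault-name MyVault --vm MyVM --policy-name DefaultPolicy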

Integrating Azure Backup with Data Protection Manager and Microsoft


Azure Backup Server
If your environment contains a large number of
systems that require protection, you might want
to consider implementing Microsoft Azure
Backup Server. Alternatively, if you have an
existing implementation of DPM, you will likely
benefit from integrating it with Azure Backup by
installing the Recovery Services Agent on the
DPM server.

These two methods yield almost equivalent results. Microsoft Azure Backup Server provides
the same set of features as DPM, except support
for tape backups and integration with other
System Center products. Azure Backup Server also offers the same management interface as DPM. By
implementing Microsoft Azure Backup Server, you gain enterprise-grade protection without requiring
System Center licenses.

Note: At the time of writing, Azure Backup Server v2 is equivalent to System Center 2016
Data Protection Manager. It is a successor to Azure Backup Server v1, which used the same code
base as Data Protection Manager 2012 R2. The current version supports a number of new
features introduced in System Center 2016 Data Protection Manager, such as Modern Backup
Storage. Modern Backup Storage provides a number of benefits, including up to 50% more
efficient storage utilization and up to 3-times faster backup times. Azure Backup Server v2 is also
necessary to protect some of the latest workloads, including SQL Server 2016 and SharePoint
Server 2016.

With both of these products, you can provide recovery for Linux and Windows operating systems that run
on-premises or in Azure, as long as an Azure Backup Server or DPM server resides in the same location.
DPM and Azure Backup Server support consistent application backups of the most common Windows
server workloads, including Microsoft SQL Server, Office SharePoint Server, and Microsoft Exchange
Server. They also deliver superior efficiency and disk space savings because of built-in deduplication
capabilities.

It is important to remember that unlike the other Recovery Services Agent–based methods, neither DPM
nor Azure Backup Server can back up data directly to an Azure Recovery Services vault. Instead, they
operate as disk-to-disk-to-cloud solutions, using their local disks as the immediate backup target, and
afterward, copying data to Azure from the newly created backup.
To integrate System Center DPM with Azure Backup by using the Azure portal, you must perform the
following steps:

1. If you do not already have an available Recovery Services vault, create a new one.

Note: You can use the same vault for protecting Azure VMs with the Azure Backup VM
extension and systems that run the Recovery Services Agent, including System Center DPM.

2. Specify the vault’s storage replication type.


3. Specify Backup goal settings, including the:

o Location of the workload: On-premises


o Workload type: any combination of Hyper-V Virtual Machines, VMware Virtual Machines,
Microsoft SQL Server, Microsoft SharePoint, Microsoft Exchange, System State, or Bare
Metal Recovery

4. On the Prepare infrastructure blade of the Azure Recovery Services vault, select the Already using
System Center Data Protection Manager or any other System Center product check box.

5. Download the vault credentials from the Prepare infrastructure blade. The Recovery Services Agent
uses vault credentials to register with the vault during the installation process.
6. Download and install the Recovery Services Agent from the Prepare infrastructure blade. Start by
clicking the Download link. Once the download completes, run the installation and register the local
computer running System Center Data Protection Manager with the vault. As part of the registration,
designate a passphrase for encrypting backups.

7. From the Protection workspace of the DPM Administrator Console, create a new protection group
or modify an existing one. Within the protection group settings, enable the Online Protection
option.

Note: You must enable short-term protection by using local disks. While you cannot use
tapes for this purpose, you can additionally enable long-term protection to tape. As part of the
protection group configuration, specify an online backup schedule, online protection data, online
retention policy, and initial online backup methodology. Similar to the Azure Backup consoles,
you can choose between performing initial backup over the internet and using the Azure
Import/Export service to copy it offline.

To deploy Microsoft Azure Backup Server by using the Azure portal, perform the following steps:

1. If you do not already have an existing, available Recovery Services vault, create a new one.

Note: You can use the same vault for protecting Azure VMs with the Azure Backup VM
extension and systems that run the Recovery Services Agent, including System Center DPM.

2. Specify the vault’s storage replication type.

3. Specify Backup goal settings, including the:

o Location of the workload: On-premises


o Workload type: any combination of Hyper-V Virtual Machines, VMware Virtual Machines,
Microsoft SQL Server, Microsoft SharePoint, Microsoft Exchange, System State, or Bare
Metal Recovery
4. On the Prepare infrastructure blade of the Azure Recovery Services vault, make sure that the
Already using System Center Data Protection Manager or any other System Center product
check box is cleared.

5. Use the Download link on the Prepare infrastructure blade to download the Microsoft Azure
Backup Server installation media, which are over 3 GB in size.
6. Download the vault credentials from the Prepare infrastructure blade. The Microsoft Azure Backup
Server setup uses vault credentials to register with the vault during the installation process.

7. Once the download of the Microsoft Azure Backup Server installation media completes, extract the
download package content by running MicrosoftAzureBackupInstaller.exe, and then start the
setup process.

Note: Azure Backup Server requires a local instance of SQL Server. You have the option of
using the SQL Server installation media in the package or deploying an instance prior to running
the setup.

8. When prompted, provide the path to the vault credentials that you downloaded earlier. When
registering the Microsoft Azure Backup Server with the vault, you must provide a passphrase for
encrypting backups.

9. Because Microsoft Azure Backup Server has the same administrative interface as the System Center
DPM, after the setup completes, the remaining configuration is the same as described above for
System Center DPM, with the exception of tape backup–related settings.

Demonstration: Implementing and using Azure VM backups


In this demonstration, you will see how to:

• Create a Recovery Services vault.

• Create a custom backup policy.

• Register an Azure VM in the Azure Recovery Services vault.

• Restore an individual file.



Check Your Knowledge


Question

You need to perform an application-level backup and restore of an Azure VM running Windows. What solution should you use?

Select the correct answer.

Install the Recovery Services Agent on the virtual machine.

Install the Recovery Services Agent on a Microsoft System Center 2016 Data Protection Manager (DPM) server. Install the DPM agent on the Azure VM.

Install Azure Backup Server. Install the DPM agent on the Azure VM.

Install the Azure VM Backup extension on the Azure VM.

Use the built-in Windows Backup feature.



Lesson 2
Overview of Azure Site Recovery
In this lesson, you will learn how Site Recovery helps address business continuity and disaster recovery.
The lesson starts with an overview of the different scenarios that Site Recovery supports. The topics that
follow provide an architectural overview of every scenario, focusing on the components of Site Recovery.
The lesson concludes with a description of the capabilities of Site Recovery.

Lesson Objectives
After completing this lesson, you will be able to:

• Provide an overview of the different scenarios that Site Recovery supports.

• Describe the capabilities of Site Recovery.

• Explain the role of different Site Recovery components when using Azure as a disaster recovery site
for an on-premises Microsoft Hyper-V environment.
• Explain the role of different Site Recovery components when using Azure as a disaster recovery site
for an on-premises System Center Virtual Machine Manager environment.

• Explain the role of different Site Recovery components when using Azure as a disaster recovery site
for an on-premises environment consisting of physical servers and VMware-hosted virtual machines.

Overview of Site Recovery scenarios


Site Recovery is a disaster recovery and business
continuity service that provides two types of
functionality—replication and orchestration.
Replication synchronizes the content of the
operating systems and data disks between
physical or virtual machines in a primary site that
hosts your production workloads and virtual
machines in a secondary site. Orchestration
provides orderly failover and failback between
these two locations.

Azure Site Recovery provides support for the following three disaster recovery scenarios, depending on the location of the primary and secondary sites:

• Failover and failback between two on-premises sites.

• Failover and failback between an on-premises site and an Azure region.

• Failover and failback between two Azure regions.

Note: At the time of authoring this course, failback functionality between two Azure
regions is in preview.

In addition, you can use Site Recovery to migrate physical and virtual machines to an Azure region by
performing failover only. This capability is available for Linux and Windows operating system instances
running in on-premises locations, in Azure, or in the Amazon Web Services (AWS) environment.

Note: When hosting on-premises virtualized workloads on the VMware vCenter 6.5,
VMware vCenter 6.0, or VMware vCenter 5.5 platform, you should consider using Azure Migrate
to perform migration to Azure. For more information regarding this solution, refer to module 3
of this course.

Site Recovery allows you to protect both physical and virtual machines, including support for Hyper-V and
VMware ESXi virtualization platforms. How you implement this protection depends on several factors,
including the:

• Location of the recovery site (on-premises or in Azure).

• Type of computer to protect (physical or virtual).

• Virtualization platform (Hyper-V or VMware ESXi).

• Virtualization management software (Microsoft System Center Virtual Machine Manager [VMM] or
VMware vCenter).

• Replication mechanism (Azure Site Recovery Agent, Hyper-V Replica, or the combination of Mobility Service and process server specific to VMware VMs and physical servers).

Site Recovery deployments include the following:

• Disaster recovery of Hyper-V virtual machines managed by VMM from one on-premises location to
another with Hyper-V–based replication.
• Disaster recovery of Hyper-V virtual machines managed by VMM from an on-premises location to
Azure with Site Recovery–based replication.

• Disaster recovery of Hyper-V virtual machines not managed by VMM from an on-premises location to
Azure with Site Recovery–based replication.

• Disaster recovery of VMware virtual machines from one on-premises location to another with
Mobility Service–based replication.
• Disaster recovery of VMware virtual machines from an on-premises location to Azure with Mobility
Service–based replication.
• Disaster recovery of physical servers running Windows and Linux operating systems from an on-
premises location to Azure with Mobility Service–based replication.

• Disaster recovery of physical servers running Windows and Linux operating systems from one on-
premises location to another with Mobility Service–based replication.

• Disaster recovery of virtual machines from one Azure region to another with Site Recovery–based
replication.
• Migration of virtual machines from a non-Microsoft cloud-hosting provider to Azure with Mobility
Service–based replication.

Replication of Hyper-V virtual machines across two on-premises sites leverages Hyper-V Replica, a
component of the Hyper-V role of the Windows Server operating system. When replicating Hyper-V
virtual machines in cross-premises scenarios, Site Recovery utilizes the Azure Recovery Services Agent. The
agent is a Site Recovery component that you must install on Hyper-V servers that are hosting protected
virtual machines. For replication of physical servers and VMware virtual machines, Site Recovery relies on a
combination of the Mobility Service—a Site Recovery component that you must install directly on
computers that you want to protect—and one or more process servers.
Process servers function as replication gateways between one or more instances of Mobility Service and
storage in the secondary site. Process servers implement performance optimization and security tasks,
such as compression, caching, and encryption.

Note: The process server is part of VMware-specific Azure Site Recovery infrastructure,
which also includes a configuration server and a master target server. The configuration server
coordinates communication between the on-premises environment and Azure in a production
environment. The master target server is responsible for coordinating communication and
replication during failback.

Note: Site Recovery supports the protection of physical computers with failover to Azure
virtual machines. However, there is no support for failback to physical computers. Instead, you
must fail back to VMware virtual machines.

Additional Reading: This module will focus on scenarios that rely on Azure as the disaster
recovery site. For details regarding scenarios where the disaster recovery site resides in another
on-premises location, refer to: “Support matrix for replication to a secondary site with Azure Site
Recovery” at: https://aka.ms/V8in6c

Site Recovery capabilities

Site Recovery provides several capabilities that help you accomplish your business continuity
goals. These capabilities include support for:

• Storage replication. As the topic “Overview of Site Recovery scenarios” explained briefly,
storage replication maintains the synchronization of disks between your production and disaster
recovery computers. Hyper-V Replica and the Azure Site Recovery Services agent offer replication
frequencies of 30 seconds, 5 minutes, or 15 minutes. They also allow you to generate
application-consistent snapshots for individual VMs. With the Mobility Service, replication is
continuous. Both of these scenarios support application-consistent snapshots for individual VMs or
across groups of VMs.

Note: Multi-VM consistency requires that VMs in the replication group are able to
communicate with each other over port 20004.

• Orchestration of planned failover and failback. With planned failover and failback, orchestration
performs an orderly transition between your production and disaster recovery environments without
any data loss.

• Orchestration of unplanned failover and failback. In this case, orchestration performs a transition
between your production and disaster recovery environments, which, depending on the availability of
the primary site, might result in data loss.

• Orchestration of test failover. Test failover typically takes place in an isolated network, making it
possible to evaluate your disaster recovery implementation without affecting the production
environment.
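The port 20004 requirement noted above can be verified before you enable multi-VM consistency. The following Python sketch — the helper names are invented for illustration and are not part of any Site Recovery tooling — tests TCP reachability between replication group members:

```python
import socket

def can_reach(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def check_replication_group(hosts, port=20004):
    """Map each host in the group to whether it is reachable on the given port."""
    return {h: can_reach(h, port) for h in hosts}
```

A failed check for any member indicates a firewall or routing problem that would break multi-VM consistency for the group.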

Recovery plan
To implement failover and failback, you must create a recovery plan. A recovery plan identifies protected
physical machines and virtual machines, and dictates the order in which Site Recovery performs individual
steps during failover and failback. Recovery plans support Azure Automation scripts and workflows in
addition to manual steps. This provides sufficient flexibility for more complex disaster recovery scenarios
and helps you achieve your RTO.
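The ordering behavior described above can be modeled as a sequence of boot groups, each with optional pre and post actions standing in for Azure Automation scripts or manual steps. The sketch below is purely illustrative — the group names, VM names, and `run_failover` helper are invented for this example and are not a Site Recovery API:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class BootGroup:
    """One step of a recovery plan: VMs started together, plus optional
    pre/post actions (stand-ins for automation scripts or manual steps)."""
    name: str
    vms: List[str]
    pre_action: Callable[[], None] = lambda: None
    post_action: Callable[[], None] = lambda: None

def run_failover(plan: List[BootGroup]) -> List[str]:
    """Start each group strictly in order; return the overall VM start sequence."""
    started = []
    for group in plan:
        group.pre_action()
        for vm in group.vms:   # in a real plan, VMs within a group start in parallel
            started.append(vm)
        group.post_action()
    return started

plan = [
    BootGroup("Group 1: domain controllers", ["dc01"]),
    BootGroup("Group 2: database tier", ["sql01", "sql02"]),
    BootGroup("Group 3: web tier", ["web01", "web02"]),
]
```

The key design point mirrors Site Recovery behavior: a later group does not start until the previous group has completed, so dependencies such as domain controllers and database servers come online first.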

Note: Module 11, “Implementing Azure-based management, monitoring, and automation,”
covers Azure Automation in detail.

Site Recovery integrates with a wide range of applications, some of which support their own replication
technologies. How you implement an optimal disaster recovery solution depends on the application and
on whether the secondary site resides on-premises or in Azure. In general, these solutions utilize one of
the two approaches:

• Using application-specific replication technology to an online virtual machine in the secondary site,
either on-premises or in Azure.

• Using Azure Site Recovery–specific replication technology to an online virtual machine in an on-
premises secondary site or to a storage account in Azure.
With either approach, you can use Azure Site Recovery to facilitate a test failover and orchestration during
planned and unplanned failover and failback. The workloads that you can protect in this manner include:

• Active Directory domain controllers hosting the Domain Name System (DNS) server role

• SQL Server with support for AlwaysOn Availability Group and Failover Cluster instances

• Internet Information Services (IIS) web apps with SQL Server as their database backend

• System Center Operations Manager

• Microsoft SharePoint Server

• SAP

• Microsoft Exchange Server

• Remote Desktop Virtual Desktop Infrastructure (VDI)

• Microsoft Dynamics AX and Dynamics CRM

• Oracle

• Windows file servers

• Citrix XenApp and XenDesktop



Site Recovery components: Hyper-V to Azure

You use several Site Recovery components when protecting on-premises Hyper-V virtual
machines with Azure as the disaster recovery site. These include Azure components and
on-premises components.

Azure components
The Azure components that you will use are:

• An Azure subscription that is hosting a Site Recovery vault.

• A Site Recovery vault that is providing a central management point for disaster
recovery–related replication and orchestration.

• An Azure general-purpose Standard storage account that is storing replicated data. You can
configure the storage account with either a locally redundant storage (LRS) or a geo-redundant
storage (GRS) setting. The storage account must reside in the same region as the Site Recovery vault.

• Optionally, an Azure Premium storage account, if you want to fail over your on-premises virtual
machines to Azure VMs with Premium storage disks. Note that, in this case, you still require a
Standard storage account, which hosts replication logs and tracks changes to on-premises virtual
machine disks. You can set the replication frequency in this case to either five minutes or 15 minutes.

• An Azure virtual network hosting virtual machines in your disaster recovery site. Site Recovery will
automatically provision these virtual machines during failover as part of the recovery plan you define.
The virtual network must also reside in the same region as the Site Recovery vault.

On-premises components
The on-premises components that you will use are:
• Protected Hyper-V virtual machines.

• A computer that is running Windows Server 2012 R2 or Windows Server 2016 and has the Hyper-V
server role hosting the virtual machines that you want to protect.

• Azure Site Recovery Provider and Azure Site Recovery Services agent running on each Hyper-V host
that contains protected Hyper-V virtual machines. The provider handles communication with the
Recovery Services vault. The agent is responsible for data replication.

Site Recovery components: VMM to Azure

You use several Site Recovery components when protecting on-premises Hyper-V virtual
machines in VMM clouds with Azure as the disaster recovery site. These include Azure
components and on-premises components.

Azure components
The Azure components that you will use are:

• A Microsoft Azure subscription that is hosting a Site Recovery vault.

• A Site Recovery vault that is serving as the central management point for disaster
recovery–related replication and orchestration. The vault also hosts recovery plans.

• An Azure general-purpose Standard storage account that is storing replicated data. You can
configure the account with either an LRS or a GRS setting. The storage account must reside in the
same region as the Site Recovery vault.

• Optionally, an Azure Premium storage account, if you want to fail over your on-premises virtual
machines to Azure VMs with Premium storage disks. Note that, in this case, you still require a
Standard storage account, which hosts replication logs and tracks changes to on-premises virtual
machine disks. You can set the replication frequency in this case to either five minutes or 15 minutes.

• Optionally, Azure managed disks, if you want to benefit from the minimized management overhead
and increased resiliency that they offer. Even if you choose this option, Azure Site Recovery still relies
on a Standard storage account as the target of cross-premises replication. It dynamically creates
managed disks when it provisions Azure virtual machines during a failover.

Note: You can choose managed disks when using Azure Site Recovery to migrate Hyper-V
virtual machines that are not part of a VMM environment to Azure. However, at the time of
writing, there is no support for failback in this scenario.

• An Azure virtual network that is hosting virtual machines in your disaster recovery site. Site Recovery
will automatically provision these virtual machines during failover as part of the recovery plan that
you define. The virtual network must also reside in the same region as the Site Recovery vault.

On-premises components
The on-premises components that you will use are:

• Protected Hyper-V virtual machines.

• Computers running Windows Server 2012 R2 or Windows Server 2016 with the Hyper-V server role
hosting the virtual machines that you want to protect.

• A System Center 2012 R2 Virtual Machine Manager or System Center 2016 Virtual Machine Manager
server that is hosting one or more private clouds and logical networks.

• Virtual machine networks linked to logical networks associated with the VMM clouds. You must map
virtual machine networks to Azure networks when creating a recovery plan in Site Recovery vault.

• The Azure Site Recovery Provider running on the VMM server. The provider handles communication
with the Recovery Services vault.

• The Azure Site Recovery Services agent running on each Hyper-V host that contains protected
Hyper-V virtual machines. The agent is responsible for data replication.

Site Recovery components: VMware and physical servers to Azure

You use several Site Recovery components when protecting VMware virtual machines and
physical servers with Azure as the disaster recovery site. These include Azure components
and on-premises components.

Azure components
The Azure components that you use are:

• A Microsoft Azure subscription that is hosting a Site Recovery vault.

• A Site Recovery vault that is providing a central management point for disaster
recovery–related replication and orchestration.

• An Azure general-purpose Standard storage account that is storing replicated data. You can
configure the account with either an LRS or a GRS setting. The storage account must reside in the
same region as the Site Recovery vault. While the replication is continuous, the number of crash-
consistent and application-consistent recovery points depends on the replication policy that you
define. Standard storage accounts support retention of recovery points for up to 72 hours.
• Optionally, an Azure Premium storage account, if you want to fail over your on-premises virtual
machines to Azure VMs with Premium storage disks. In this case, you still require a Standard storage
account, which hosts replication logs, tracking changes to on-premises virtual disks. While the
replication is continuous, the number of crash-consistent and application-consistent recovery points
depends on the replication policy that you define. Premium storage accounts support retention of
recovery points for up to 24 hours.

• An Azure virtual network that is hosting virtual machines in your disaster recovery site. Site Recovery
will automatically provision these virtual machines during failover as part of the recovery plan that
you define. The virtual network must also reside in the same region as the Site Recovery vault.

• An Azure virtual machine that is hosting a process server. This component is required only during
failback, to replicate Azure virtual machines to on-premises VMware virtual machines.

On-premises components
The on-premises components that you use are:

• Protected VMware virtual machines and physical computers running the Mobility Service.

• VMware ESXi hosts that are hosting protected VMware virtual machines.

• A vCenter 6.5, vCenter 6.0, or vCenter 5.5 server that is providing centralized management of vSphere
hosts and their virtual machines.

• The Mobility Service that is running on all protected VMware virtual machines or physical servers. The
service handles Site Recovery–related on-premises communication. It also tracks changes to local
disks and continuously replicates them out.

• A vCenter user account with permissions to discover VMware VMs automatically and orchestrate
replication, failover, and failback.

• An operating system account for Windows and Linux VMs with sufficient permissions to install the
Mobility Service.

• A physical computer or a VMware virtual machine, referred to as the configuration server, which is
hosting the following Site Recovery components:

o Configuration server component. This component is responsible for communication between on-
premises, protected physical computers or virtual machines and Azure, including the
management of the data replication and recovery process.

o Process server component. This component operates as a replication gateway during normal
operations (outside of disaster recovery events). All replication data from the Mobility Service that
is running on the protected physical computers or virtual computers in the primary site flows via
the process server. The process server applies caching, encryption, and compression to secure
and optimize its transfer. The process server also handles discovery of VMware virtual machines
within the local vCenter environment and installation of the Mobility Service on these machines.

o Master target server component. This component performs data replication during failback from
Azure. It also runs the software component referred to as Unified agent, which facilitates
communication with the configuration server and the process server.

Cross-premises component
You also require a cross-premises component, which is a hybrid network connection between the on-
premises network and the Azure virtual network that is hosting virtual machines in your disaster recovery
site. The connection is necessary only during failback. During normal operations, replication traffic and
cross-premises communication with a Site Recovery vault flow via the internet by default, unless you
implement public peering via Azure ExpressRoute. This does not compromise the security of your
environment, because the configuration server and the process server always encrypt communication
traffic and replication data. You can implement this connection by using either site-to-site virtual private
network (VPN) or ExpressRoute.

Site Recovery components: Azure to Azure


You will use several Site Recovery components
when protecting Azure VMs with another Azure
region as the disaster recovery site. These include
Azure components only.

Azure components
The Azure components that you use are:
• A Microsoft Azure subscription that is
hosting a Site Recovery vault.
• A Site Recovery vault that is providing a
central management point for disaster
recovery–related replication and
orchestration. This vault should reside in the Azure region that will host the disaster recovery site.

• An Azure general-purpose Standard storage account that is storing replicated data. You can
configure the account with either an LRS or a GRS setting. The storage account must reside in the
same region as the Site Recovery vault. While the replication is continuous, the number of crash-
consistent and application-consistent recovery points depends on the replication policy that you
define. Standard storage accounts support retention of recovery points for up to 72 hours.
• An Azure general-purpose Standard storage account that serves as a temporary cache of changes to
the source Azure VM. This storage account must reside in the same region as the source VM.
• Optionally, an Azure Premium storage account, if you want to fail over the source virtual
machines to Azure VMs with Premium storage disks. In this case, you still require a Standard storage
account, which hosts replication logs, tracking changes to the source virtual machine disks. While
the replication is continuous, the number of crash-consistent and application-consistent recovery
points depends on the replication policy that you define. Premium storage accounts support
retention of recovery points for up to 24 hours.

• Optionally, Azure managed disks, if you want to benefit from the minimized management overhead
and increased resiliency that they offer. Even if you choose this option, Azure Site Recovery still relies
on a Standard storage account for caching and as the target of replication. It
dynamically creates managed disks when it provisions Azure virtual machines during a failover.
• An Azure virtual network that is hosting virtual machines in your disaster recovery site. Site Recovery
will automatically provision these virtual machines during failover as part of the recovery plan that
you define. The virtual network must also reside in the same region as the Site Recovery vault.

Note: At the time of authoring this course, the use of managed disks and failback
functionality between two Azure regions is in preview.

Check Your Knowledge

Question

Which of the following scenarios does Site Recovery support? (Select all that
apply.)

Select the correct answer.

Failover and failback between on-premises physical computers running
Windows Server and Azure virtual machines

Failover and failback between on-premises physical Linux computers and
Azure virtual machines

Failover and failback between Hyper-V virtual machines across two
on-premises sites without using VMM

Failover and failback between on-premises Hyper-V virtual machines and
Azure virtual machines without using VMM

Migration of virtual machines running Windows Server from Amazon Web
Services to Azure

Lesson 3
Planning for Site Recovery
In this lesson, you will learn how to plan for Site Recovery in scenarios where the secondary site resides in
Azure. This planning should include factors such as the processing capacity of Azure virtual machines and
cross-premises network connectivity. In addition, you will learn how differences between the capabilities
of on-premises Hyper-V environments and the virtualization platform in Azure affect the planning of Site
Recovery deployments.

Lesson Objectives
After completing this lesson, you will be able to:

• Identify the primary considerations when planning for cross-premises Azure Site Recovery
implementations.

• Describe additional considerations for protecting Hyper-V workloads in Azure when you are not using
System Center Virtual Machine Manager (VMM).

• Describe additional considerations for protecting Hyper-V workloads in Azure when you are using
System Center VMM.
• Describe additional considerations for protecting VMware and physical server–based workloads.

Primary considerations in planning for cross-premises Site Recovery deployments

The first factor to consider when planning for Site Recovery is whether the disaster recovery site will
reside in an on-premises location or in Azure. In addition, you must also take into account the
characteristics of your primary site, including:

• The location. You should ensure that the secondary site is far enough from the primary site that it
will remain operational if there is a region-wide disaster affecting the availability of the primary
site. On the other hand, the secondary site should be relatively close to the primary site to
minimize the latency of replication traffic and connectivity from the primary site.

• The existing virtualization platform. The architecture of your solution and its capabilities will depend
on whether you are using Hyper-V or ESXi, and whether you rely on VMM or vCenter to manage
virtualization hosts.
• The computers and workloads that you intend to protect. Your secondary site should provide a
sufficient amount of compute and storage resources to accommodate production workloads
following the failover.

Capacity planning for Hyper-V and VMware replication to Azure


For cross-premises Azure Site Recovery scenarios that rely on Azure as the disaster recovery site, Microsoft
offers Azure Site Recovery Deployment Planner for Hyper-V and VMware. The planner allows you to
determine Azure Site Recovery network, compute, and storage requirements by providing the following
information:
• Compatibility assessment. The planner analyzes the configuration of Hyper-V or VMware virtual
machines to verify whether they comply with limits applicable to Azure virtual machines. For
example, this could include the number, size, and performance characteristics of virtual disks, or
the boot configuration of the operating system.

• Cross-premises network bandwidth assessment. The planner estimates the network bandwidth
necessary to facilitate cross-premises data synchronization, including initial and delta replication.

• Azure infrastructure requirements. The planner identifies the number and type of storage accounts
and virtual machines to be provisioned in Azure.
• On-premises infrastructure requirements. The planner identifies the optimum number of
configuration and process servers.

• Initial replication guidance. The planner recommends the number of virtual machines that can
replicate in parallel to minimize the time of initial synchronization.
• Estimated infrastructure and licensing costs. The planner determines the costs that are necessary to
implement the disaster recovery site and to perform a disaster recovery test.
You must install the tool on a Windows Server 2012 R2 or Windows Server 2016 physical or virtual
computer with direct connectivity to the Hyper-V or VMware environment and to the internet. When
targeting Hyper-V hosts, the compute, memory, and storage characteristics of the server should match
the equivalent settings of the target hosts. When targeting VMware ESXi hosts, the compute, memory,
and storage characteristics of the server should match the sizing recommendations of the configuration
server available at https://aka.ms/Ltr68r.

In a Hyper-V environment, during the installation and the initial setup, use an account that is a member of
the local Administrators group on Hyper-V hosts. In addition, make sure that the TrustedHosts list of
target Hyper-V hosts includes the server where you installed the tool. Also, the TrustedHosts list of the
server where you installed the tool must include all target Hyper-V hosts. In a VMware environment, use
an account with, at minimum, read-only permissions to the VMware vCenter server and ESXi hosts.

The tool operates in three modes. During the first, you perform profiling of the existing environment by
relying on Hyper-V or vCenter performance counters, in a manner that minimizes any potential negative
performance impact. During the second mode, you generate reports based on the profiling data. You can
customize their output by specifying a desired RPO value prior to report generation. The third mode,
independently of the other two, allows you to evaluate available bandwidth between the on-premises
environment and the Azure region that you intend to use as your disaster recovery site.

Additional Reading: For more information, refer to: “Site Recovery Deployment Planner
for Hyper-V to Azure” at: https://aka.ms/K3odm6 and to: “Azure Site Recovery Deployment
Planner for VMware to Azure” at: https://aka.ms/Hwt6m6
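Alongside the planner's bandwidth assessment, you can make a rough sanity check of initial replication time from total source data and link speed. The sketch below is a back-of-the-envelope estimate only — the function name and the 70 percent efficiency factor are assumptions, not planner output:

```python
def initial_replication_hours(total_data_gb: float, bandwidth_mbps: float,
                              efficiency: float = 0.7) -> float:
    """Rough time to complete initial replication.

    bandwidth_mbps is the link rate in megabits per second; efficiency
    discounts protocol overhead and competing traffic (assumed value).
    """
    total_megabits = total_data_gb * 1024 * 8          # GB -> megabits
    effective_mbps = bandwidth_mbps * efficiency
    return total_megabits / effective_mbps / 3600      # seconds -> hours

# Example: 2 TB of source disks over a 100 Mbps link at 70% efficiency
hours = initial_replication_hours(2048, 100)
```

Estimates in the multi-day range suggest scheduling initial replication off-hours or limiting how many virtual machines replicate in parallel, which is exactly the guidance the deployment planner produces.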

Capacity planning for VMware replication to Azure


By using the deployment planner for VMware replication to Azure, you can gather essential information
for capacity planning of your Azure Site Recovery implementation. You must correlate that information
with the following constraints and recommendations that apply to the primary components of that
implementation:
• A single process server is capable of handling up to 2 terabytes (TB) of replication traffic per day.
This affects the number of process servers that you will need to provision. It also enforces the limit on
the amount of daily changes for an individual virtual machine that you can protect by using Azure
Site Recovery. Each process server should have a separate disk of at least 600 GB in size that will
provide a disk-based cache.
• The configuration server should reside in a location with direct network access to the virtual machines
that you intend to protect.
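Based on the 2-TB-per-day guidance above, you can estimate a lower bound on the number of process servers from aggregate daily churn. The helper below is a simplified sketch under a stated assumption — it treats churn as freely divisible across process servers, so it gives a minimum count, not a placement plan:

```python
import math

PROCESS_SERVER_DAILY_LIMIT_TB = 2.0   # per the guidance above

def process_servers_needed(daily_churn_gb_per_vm: list) -> int:
    """Minimum number of process servers for the given per-VM daily churn (GB)."""
    total_tb = sum(daily_churn_gb_per_vm) / 1024
    return max(1, math.ceil(total_tb / PROCESS_SERVER_DAILY_LIMIT_TB))
```

For example, ten VMs each churning 500 GB per day total roughly 4.9 TB, which exceeds what two process servers can absorb and therefore requires at least three.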

Azure virtual machine–related requirements


You must ensure that your on-premises virtual machines comply with most of the Azure virtual machine-
specific requirements. These requirements include:

• The operating system running within each protected virtual machine must be supported by Azure.

• The virtual machine operating system disk sizes cannot exceed 2 TB when replicating Hyper-V
Generation 1 VMs, VMware VMs, or physical servers to Azure, and 300 GB when replicating Hyper-V
Generation 2 VMs to Azure.

• The virtual machine data disk sizes cannot exceed 4 TB.

• The virtual machine data disk count cannot exceed 16 when replicating Hyper-V VMs to Azure and 64
when replicating VMware VMs to Azure.

• The virtual machine disks cannot be Internet Small Computer System Interface (iSCSI), Fibre Channel,
or shared virtual hard disks.

Note: You can exclude individual disks in scenarios that involve failover to Azure from both
VMware and Hyper-V VMs.
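The size and count limits above lend themselves to a mechanical pre-check across a source inventory. The sketch below encodes the limits exactly as quoted in this topic — verify them against current Azure Site Recovery documentation before relying on them — and the `SourceVM` structure and helper names are invented for illustration:

```python
from dataclasses import dataclass

# Limits as quoted in the text above (GB); confirm against current docs.
LIMITS = {
    "hyperv_gen1": {"os_disk_gb": 2048, "data_disk_gb": 4096, "data_disks": 16},
    "hyperv_gen2": {"os_disk_gb": 300,  "data_disk_gb": 4096, "data_disks": 16},
    "vmware":      {"os_disk_gb": 2048, "data_disk_gb": 4096, "data_disks": 64},
}

@dataclass
class SourceVM:
    name: str
    platform: str            # one of the LIMITS keys
    os_disk_gb: int
    data_disks_gb: list

def compliance_issues(vm: SourceVM) -> list:
    """Return reasons the VM cannot replicate to Azure (empty list if compliant)."""
    limit = LIMITS[vm.platform]
    issues = []
    if vm.os_disk_gb > limit["os_disk_gb"]:
        issues.append(f"{vm.name}: OS disk {vm.os_disk_gb} GB exceeds {limit['os_disk_gb']} GB")
    if len(vm.data_disks_gb) > limit["data_disks"]:
        issues.append(f"{vm.name}: {len(vm.data_disks_gb)} data disks exceed {limit['data_disks']}")
    for size in vm.data_disks_gb:
        if size > limit["data_disk_gb"]:
            issues.append(f"{vm.name}: data disk {size} GB exceeds {limit['data_disk_gb']} GB")
    return issues
```

Running such a check before enabling protection surfaces the same class of problems that the deployment planner's compatibility assessment reports.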

At the time of authoring, Azure does not support the .vhdx disk type or the Generation 2 Hyper-V virtual
machine type. Instead, Azure virtual machines must use the .vhd disk type and the Generation 1 Hyper-V
virtual machine type. Fortunately, these limitations are not relevant to virtual machine protection. Site
Recovery is capable of automatically converting the virtual disk type and the generation of Windows
virtual machines when replicating virtual machine disks to Azure Storage.

Note: At the time of authoring, Site Recovery does not support Generation 2 virtual
machines that are running Linux.

Network-related requirements
To facilitate different types of failover, you must consider the network requirements of the workloads that
you intend to protect. Keep in mind that these workloads must remain accessible following a planned,
unplanned, or test failover. To accomplish these objectives, consider the following when designing your
Azure Site Recovery–based solution:
• IP address space of the Azure virtual network hosting protected virtual machines after the failover.
You have two choices when deciding which IP address space to use:
o Use the same IP address space in the primary and the secondary site. The benefit of this approach
is that virtual machines can retain their on-premises IP addresses. This eliminates the need to
update DNS records associated with these virtual machines. Such updates typically introduce
delay during recovery. The drawback of this approach is that you cannot establish direct
connectivity via Site-to-Site VPN (S2S VPN) or ExpressRoute between your on-premises locations
and the recovery virtual network in Azure. This, in turn, implies that you must protect at least
some of the on-premises AD DS domain controllers. Failover and failback of domain controllers
require additional configuration steps, which affect the recovery time.

Additional Reading: For more information, refer to: “Use Azure Site Recovery to protect
Active Directory and DNS” at: https://aka.ms/Lbguru

o Use a nonoverlapping IP address space in the primary and the secondary site. The benefit of this
approach is that you can set up direct connectivity via Site-to-Site VPN or ExpressRoute between
your on-premises locations and the recovery virtual network in Azure. This allows you, for
example, to provision Azure virtual machines that are hosting Active Directory domain controllers
in the recovery site and keep the Azure virtual machines online during normal business
operations. By having these domain controllers available, you minimize the failover time. In
addition, you can perform a partial failover, which involves provisioning only a subset of the
protected virtual machines in Azure, rather than all of them. The drawback is that the IP
addresses of protected on-premises computers will change following a failover. To minimize the
impact of these changes, you can lower the Time To Live (TTL) value of the DNS records
associated with the protected computers.

Additional Reading: For more information, refer to: “Set up IP addressing to connect after
failover to Azure” at: http://aka.ms/Kp8i0b

• Network connectivity between your on-premises locations and the Azure virtual network that is
hosting the recovery site. You have three choices when deciding which cross-premises network
connectivity method to use:

o Point-to-Site (P2S) VPN

o Site-to-Site VPN

o ExpressRoute
Point-to-Site VPN is of limited use in this case, because it allows connectivity from individual
computers only. It might be suitable primarily for a test failover when connecting to the isolated
Azure virtual network where Site Recovery provisions replicas of the protected virtual machines. For
planned and unplanned failovers, you should consider ExpressRoute, because it offers several
advantages over Site-to-Site VPN, including the following:

o All communication and replication traffic will flow via a private connection, rather than the
internet.

o The connection will be able to accommodate a high volume of replication traffic.

o Following a failover, on-premises users will benefit from consistent, high-bandwidth, and low-
latency connectivity to the Azure virtual network. This assumes that the ExpressRoute circuit will
remain available even if the primary site fails.
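When evaluating the nonoverlapping address-space option described earlier, candidate ranges can be checked with Python's standard ipaddress module; the helper name here is illustrative:

```python
import ipaddress

def spaces_overlap(primary_cidr: str, recovery_cidr: str) -> bool:
    """True if the on-premises and Azure recovery address spaces overlap,
    which would rule out direct S2S VPN or ExpressRoute connectivity."""
    primary = ipaddress.ip_network(primary_cidr)
    recovery = ipaddress.ip_network(recovery_cidr)
    return primary.overlaps(recovery)

# Example: an on-premises 10.0.0.0/16 overlaps a recovery VNet of 10.0.1.0/24,
# but not 10.1.0.0/16.
```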

Additional considerations when configuring Azure-based protection of Hyper-V virtual machines
Consider the following factors when you are
configuring Azure-based protection of Hyper-V
virtual machines:

• Each Hyper-V server that is hosting virtual machines that you want to protect must have outbound
connectivity to Azure via TCP port 443. Both the provider and the agent use this port. You must
allow access to the following URLs from the Hyper-V servers:
o *.accesscontrol.windows.net

o login.microsoftonline.com

o *.backup.windowsazure.com

o *.blob.core.windows.net

o *.hypervrecoverymanager.windowsazure.com

o time.nist.gov
o time.windows.net

• Depending on the outcome of your capacity planning, you might want to adjust the bandwidth that
is available to the Hyper-V replication traffic. There are two ways to accomplish this:

o Throttle bandwidth to a specific value according to the schedule that you define. You can
configure this setting from the Microsoft Azure Backup Microsoft Management Console (MMC)
snap-in. In the console, you can display the Microsoft Azure Backup Properties dialog box, and
then switch to the Throttling tab. From there, you can set the maximum bandwidth that is
available for backup operations during work and non-work hours. You can define what
constitutes the start and end of work hours.
o Increase or decrease the number of threads that are dedicated to replicating virtual disks on a
per-virtual machine basis during failover and failback. This requires direct modification of entries
in the HKLM\SOFTWARE\Microsoft\Windows Azure Backup\Replication registry key. The
UploadThreadsPerVM entry controls the number of threads dedicated to replicating the disk
data. The DownloadThreadsPerVM entry controls the number of threads when failing back
from Azure.
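
As an illustration, the following PowerShell sketch sets these registry entries on a Hyper-V host. The thread counts shown are arbitrary example values, not recommendations; size them according to your capacity planning:

```powershell
# Adjust per-VM replication thread counts (example values only).
$key = 'HKLM:\SOFTWARE\Microsoft\Windows Azure Backup\Replication'
Set-ItemProperty -Path $key -Name UploadThreadsPerVM -Value 8 -Type DWord
Set-ItemProperty -Path $key -Name DownloadThreadsPerVM -Value 8 -Type DWord
```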

Additional considerations when configuring Azure-based protection of Hyper-V VMs in VMM clouds
Consider the following factors when you are
configuring Azure-based protection of Hyper-V
virtual machines located in VMM clouds:

• You must create virtual machine networks in your VMM environment. You associate
virtual machine networks with VMM logical
networks, which, in turn, link to private
clouds containing protected virtual
machines. Once you create virtual machine
networks, you must map them to the
corresponding Azure virtual networks. This
ensures that, following a failover, the
network configuration in Azure matches the one in your on-premises environment. By mapping
networks, you ensure that replicas of protected virtual machines, which reside on the same on-
premises network, also reside on the same Azure virtual network. You can map multiple virtual
machine networks to a single Azure virtual network.
• You can select individual VMM clouds that will appear in the Azure portal. You can choose this option
to ensure that the Azure Site Recovery Provider running on the VMM server does not upload all your
cloud metadata to the Recovery Services vault.
• If you want to ensure that Site Recovery attaches a replica of a protected virtual machine to a specific
subnet, give the Azure virtual network subnet the same name as the virtual machine network subnet.
• The Azure Site Recovery Provider running on the VMM server must have outbound connectivity to
Azure via TCP port 443. The Azure Site Recovery Services agent running on each Hyper-V server that
is hosting the virtual machines that you want to protect also must have outbound connectivity to
Azure via TCP port 443. You must allow access to the following URLs from the VMM server and
Hyper-V servers:

o *.accesscontrol.windows.net

o login.microsoftonline.com

o *.backup.windowsazure.com

o *.blob.core.windows.net

o *.hypervrecoverymanager.windowsazure.com

o time.nist.gov

o time.windows.net
• Depending on the outcome of your capacity planning, you can adjust the bandwidth available to the
Hyper-V replication traffic on individual Hyper-V hosts. For details regarding this option, refer to the
topic “Additional considerations when configuring Azure-based protection of Hyper-V virtual
machines” in this lesson.

Additional considerations when configuring Azure-based protection of VMware VMs and physical servers
Consider the following factors when configuring
Azure-based protection of VMware virtual
machines and physical servers:

• Ensure that you are using VMware vSphere 6.5, vSphere 6.0, or vSphere 5.5.

• Ensure that you are using VMware vCenter 6.5, vCenter 6.0, or vCenter 5.5 to manage vSphere
hosts.

• To use push installation of the Mobility Service on the Windows virtual machine that you intend
to protect, ensure that the Windows Defender Firewall allows inbound file and printer sharing and
Windows Management Instrumentation traffic. For Linux virtual machines, you should enable the
Secure File Transfer Protocol subsystem and password authentication in the sshd_config file.
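
On a Linux virtual machine, the corresponding sshd_config entries might look like the following; the sftp-server path varies by distribution, so treat it as a placeholder:

```
# /etc/ssh/sshd_config
Subsystem sftp /usr/lib/openssh/sftp-server
PasswordAuthentication yes
```

Restart the sshd service after editing the file so that the changes take effect.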

• The computer hosting the configuration server component must have outbound connectivity to
Azure via TCP port 443. The computer hosting the process server component should have outbound
connectivity to Azure via TCP port 9443. You can use a different port for this purpose if needed.
Because both the process server and the configuration server components reside by default on the
configuration server, you should make sure that this server can access the following URLs over ports
443 and 9443:

o *.accesscontrol.windows.net

o login.microsoftonline.com
o *.backup.windowsazure.com

o *.blob.core.windows.net

o *.hypervrecoverymanager.windowsazure.com

o time.nist.gov

o time.windows.net
The configuration server should also be able to reach
https://dev.mysql.com/get/archives/mysql-5.5/mysql-5.5.37-win32.msi over TCP port 80.
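
To spot-check this outbound connectivity from the configuration server, you could use the Test-NetConnection cmdlet. For example:

```powershell
# Verify outbound reachability of a required Site Recovery URL over HTTPS.
Test-NetConnection -ComputerName login.microsoftonline.com -Port 443
# Verify that the MySQL installer download location is reachable over HTTP.
Test-NetConnection -ComputerName dev.mysql.com -Port 80
```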

• Depending on the outcome of your capacity planning, you can adjust the bandwidth available to the
replication traffic. In this scenario, the process server handles replication. Therefore, you can configure
its Microsoft Azure Backup throttling settings or adjust the number of upload and download threads
per virtual machine by modifying its registry. For details, refer to the topic “Additional
considerations when configuring Azure-based protection of Hyper-V virtual machines” in this lesson.

Additional considerations when configuring Azure-based protection of Azure VMs
Consider the following factors when you are configuring Azure-based protection of Azure VMs:

• Each Azure VM that you want to protect must have outbound connectivity via TCP port 443. You
must allow access to the following URLs from the Azure VMs:

o *.hypervrecoverymanager.windowsazure.com

o *.blob.core.windows.net

o login.microsoftonline.com
o *.servicebus.windows.net

• Windows and Linux Azure VMs should have the latest trusted root certificates in their certificate
stores. To accomplish this on Azure VMs that are running Windows, install the latest Windows
updates. For Linux VMs, adhere to the relevant guidance from their respective distributors.

• Optionally, delegate Azure Site Recovery responsibilities by using Role-Based Access Control (RBAC).
You can choose from the following predefined roles or create custom ones:

o Site Recovery Contributor. Grants all permissions necessary to perform failover and failback
operations and configure Site Recovery, but without the ability to delete the Azure Site Recovery
vault or to delegate permissions to others.
o Site Recovery Operator. Grants all permissions necessary to perform failover and failback
operations, but without the ability to configure Site Recovery.

o Site Recovery Reader. Grants permissions to view Site Recovery state and operations.
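
As an example, you could assign one of these predefined roles with Azure PowerShell. The following is a sketch; the sign-in name and resource group name are placeholder values:

```powershell
# Grant a user the Site Recovery Operator role at resource group scope.
New-AzureRmRoleAssignment -SignInName "operator@adatum.com" `
    -RoleDefinitionName "Site Recovery Operator" `
    -ResourceGroupName "AdatumRecoveryRG"
```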

Check Your Knowledge


Question

Which of the following on-premises virtual machines can you protect by using Site
Recovery?

Select the correct answer.

A Generation 2 Hyper-V virtual machine running Windows Server 2016 with a
1-TB operating system VHD virtual disk

A Generation 1 Hyper-V virtual machine running Windows Server 2016 with a
4-TB operating system VHD virtual disk

A Generation 1 Hyper-V virtual machine running Windows Server 2016 with a
512-GB operating system iSCSI disk

A VMware Linux virtual machine with a 2-TB operating system virtual disk

A Generation 1 Hyper-V virtual machine running Windows Server 2016 with a
2-TB operating system VHD virtual disk

Lesson 4
Implementing Site Recovery with Azure as the disaster
recovery site
The Azure portal simplifies Site Recovery implementation by guiding you through the implementation
steps, asking for your design decisions, and explaining how to execute the corresponding actions. The
implementation steps reflect the recovery scenario that you have chosen as the most suitable for your
organization’s business continuity needs.

In this lesson, you will learn how to implement Site Recovery with Azure as the disaster recovery site by
using the Azure portal in the following scenarios:
• Implementing Azure-based protection of Hyper-V virtual machines without VMM.

• Implementing Azure-based protection of Hyper-V virtual machines located in VMM clouds.

• Implementing Azure-based protection of VMware virtual machines and physical servers.

• Implementing Azure-based protection of Azure VMs.

• Configuring replication of an Azure VM to another Azure region.

Lesson Objectives
After completing this lesson, you will be able to:

• Explain how to implement Azure-based protection of Hyper-V virtual machines without VMM.
• Explain how to implement Azure-based protection of Hyper-V virtual machines located in VMM
clouds.

• Explain how to implement Azure-based protection of VMware virtual machines and physical servers.

• Implement Azure-based protection of Azure VMs.

• Explain how to manage and automate Site Recovery.

Implementing Azure-based protection of Hyper-V virtual machines without VMM
In this topic, you will step through a sample
implementation of Site Recovery with an on-
premises primary site and a secondary site that is
residing in Azure. Your intention is to protect on-
premises Hyper-V virtual machines. In this
scenario, you are not using VMM to manage
your Hyper-V hosts. Your implementation
consists of the following tasks:

1. Creating an Azure virtual network in your Azure subscription in the Azure region that meets your
disaster recovery objectives.

2. Creating one or more Azure storage accounts in the same subscription and the same region as the
Azure virtual network.

3. Creating a Recovery Services vault in the same subscription and the same region as the storage
accounts and the virtual network.

4. Specifying the protection goal of your implementation. When using the Azure portal, this is the first
task of the Prepare Infrastructure stage, which you initiate from the Site Recovery blade of the
Recovery Services vault. This task involves answering the following questions:

o Where are your machines located? Select the On-premises option.

o Where do you want to replicate your machines? Select the To Azure option.

o Are your machines virtualized? Select the Yes, with Hyper-V option.

o Are you using System Center VMM to manage your Hyper-V hosts? Select the No option.
5. Setting up the source environment. In this case, you must create a Hyper-V site, which serves as a
logical container for Hyper-V hosts or clusters of Hyper-V hosts. Once you create a site, you must add
one or more Hyper-V hosts to it. Next, download the Azure Site Recovery Provider setup file and
Recovery Services vault registration key to the Hyper-V server. Run the installation by using the newly
downloaded setup file and, when you receive a prompt, provide the vault registration key.

Note: The Azure Site Recovery Provider setup file installs both the provider and the
Recovery Services agent.

6. Setting up the target environment. As part of this step, you must specify the post-failover deployment
model. In this walkthrough, you will choose Resource Manager, but Site Recovery also supports the
classic deployment model. At this point, you will also have a chance to verify that you can use the
virtual network and the storage accounts that you created earlier to host replicas of protected virtual
machines and their disks. You can create the virtual network and storage account if this is not the
case.

7. Setting up replication settings. This step involves configuring a replication policy and associating it
with the Hyper-V site that you created earlier. The policy includes settings such as copy frequency,
recovery point retention, app-consistent snapshot frequency, initial replication start time, and
encryption of data stored in Azure Storage.

8. Selecting the virtual machines to protect and enabling their replication. This is part of the Replicate
Applications stage. You will need to specify the source Hyper-V site that you defined earlier. You
also will need to select the Azure virtual network and the storage account you want to use to host the
replica of the protected virtual machine and its disks. You can also choose the target subnet. In
addition, this step involves assigning the name to the target virtual machine and choosing its
operating system. Finally, you also must choose a replication policy that you want to take effect in this
case.
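
Tasks 1 through 3 above can be sketched with Azure PowerShell (AzureRM module, which this course uses). All names, the address prefix, and the region below are example values, not prescribed ones:

```powershell
# 1. Create an Azure virtual network in the target region.
$rg = New-AzureRmResourceGroup -Name "AdatumRecoveryRG" -Location "West Europe"
New-AzureRmVirtualNetwork -Name "RecoveryVNet" -ResourceGroupName $rg.ResourceGroupName `
    -Location "West Europe" -AddressPrefix "10.10.0.0/16"

# 2. Create a storage account in the same subscription and region.
New-AzureRmStorageAccount -ResourceGroupName $rg.ResourceGroupName -Name "adatumrecoverysa" `
    -SkuName Standard_LRS -Location "West Europe"

# 3. Create a Recovery Services vault in the same subscription and region.
New-AzureRmRecoveryServicesVault -Name "AdatumVault" -ResourceGroupName $rg.ResourceGroupName `
    -Location "West Europe"
```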

Additional Reading: For more information, refer to: “Set up disaster recovery of on-
premises Hyper-V VMs to Azure” at: http://aka.ms/Hv9v2k

Implementing Azure-based protection of Hyper-V virtual machines located in VMM clouds
In this topic, you will step through another
sample implementation of Site Recovery with an
on-premises primary site and a secondary site
that is residing in Azure. Your intention, in this
case, is to protect on-premises Hyper-V virtual
machines. In this scenario, you are using VMM to
manage your Hyper-V hosts. Your
implementation consists of the following tasks:

1. Creating one or more Azure virtual networks in your Azure subscription in the Azure region that
meets your disaster recovery objectives.

2. Creating one or more Azure storage accounts in the same subscription and the same region as the
Azure virtual network.

3. Creating a Recovery Services vault in the same subscription and the same region as the storage
accounts and the virtual network.
4. Preparing for the mapping of on-premises virtual machine networks to the Azure virtual networks.
You must make sure that all virtual machines that you intend to protect are connected to the virtual
machine networks you will be mapping to the Azure virtual networks.

5. Specifying the protection goal of your implementation. When using the Azure portal, this is the first
task of the Prepare Infrastructure stage, which you initiate from the Site Recovery blade of the
Recovery Services vault. This task involves answering the following questions:

o Where are your machines located? Select the On-premises option.

o Where do you want to replicate your machines? Select the To Azure option.

o Are your machines virtualized? Select the Yes, with Hyper-V option.
o Are you using System Center VMM to manage your Hyper-V hosts? Select the Yes option.

6. Setting up the source environment. This consists of the following steps:

a. Adding a System Center VMM server entry representing your on-premises VMM environment
and selecting the VMM cloud that is hosting the virtual machines that you intend to protect.

b. Downloading the Azure Site Recovery Provider setup file and Recovery Services vault registration
key to the VMM server. Run the installation by using the newly downloaded setup file and, when
you receive a prompt, provide the vault registration key. You will also receive a prompt to accept
or modify a Secure Sockets Layer (SSL) certificate for encryption of disks uploaded to the
Recovery Services vault. Finally, you will have the option to enable synchronization of cloud
metadata for all VMM clouds. Optionally, you can select individual VMM clouds that you want to
be visible in the Azure portal.

c. Downloading the setup file for the Azure Recovery Services agent and installing it on each Hyper-
V host in the VMM cloud that is associated with the virtual machine network that you will be
mapping to the Azure virtual network.

7. Setting up the target environment. As part of this step, you must specify the post-failover deployment
model. In this walkthrough, you will choose Resource Manager, but Site Recovery also supports the
classic deployment model. At this point, you will also have a chance to verify that you can use the
virtual network and the storage account that you created earlier to host replicas of protected virtual
machines and their disks. You can create the virtual network and storage accounts if this is not the
case. Finally, you must also configure network mapping between virtual machine networks and the
Azure virtual network.

8. Setting up replication settings. This step involves configuring a replication policy and associating it
with the VMM cloud that you selected in step 6a. The policy includes settings such as copy frequency,
recovery point retention, app-consistent snapshot frequency, initial replication start time, and
encryption of data stored in Azure Storage.

9. Selecting the VMM cloud and enabling its replication. This is part of the Replicate Applications
stage. You must specify the VMM cloud that you selected in step 6a. You also must select the Azure
virtual network and the storage account that you want to use to host replicas of protected virtual
machines and their disks. You can also choose the target subnet. In addition, this step involves
assigning the name to the target virtual machine and choosing its operating system. Finally, you also
must choose a replication policy that you want to take effect in this case.

Additional Reading: For more information, refer to: “Set up disaster recovery of on-
premises Hyper-V VMs to Azure” at: http://aka.ms/Hv9v2k

Implementing Azure-based protection of VMware virtual machines and physical servers
In this topic, you will step through yet another
sample implementation of Site Recovery with an
on-premises primary site and a secondary site
that is residing in Azure. Your intention, in this
case, is to protect on-premises VMware virtual
machines and physical servers. In this scenario,
you are using VMware vCenter to manage your
vSphere hosts. Your implementation consists of
the following tasks:

1. Create an Azure virtual network in your Azure subscription in the Azure region that meets your
disaster recovery objectives.

2. Create one or more Azure storage accounts in the same subscription and the same region as the
Azure virtual network.
3. Set up a user account on the vSphere host or vCenter server to facilitate automatic discovery of
VMware virtual machines.
4. Prepare the configuration server by allowing outbound access to the Azure URLs listed in the
previous lesson and installing vSphere PowerCLI 6.0.

5. Create a Recovery Services vault in the same subscription and the same region as the storage
accounts and the virtual network.

6. Specify the protection goal of your implementation. When using the Azure portal, this is the first task
of the Prepare Infrastructure stage, which you initiate from the Site Recovery blade of the
Recovery Services vault. This task involves answering the following questions:

o Where are your machines located? Select the On-premises option.

o Where do you want to replicate your machines? Select the To Azure option.

o Are your machines virtualized? Select the Yes, with VMware vSphere Hypervisor option.

7. Set up the source environment. This consists of the following steps:

a. Adding the configuration server entry that is representing your on-premises configuration server.

b. Downloading the Site Recovery Unified Setup installation file and the Recovery Services vault
registration key to the configuration server. Run the installation by using the newly downloaded
setup file and, when you receive a prompt, provide the vault registration key. As part of the
installation, you will set up an instance of MySQL Server and specify its admin credentials. If
needed, you will also have a chance to change the data replication port from its default of TCP
9443 to a custom value.
c. Running CSPSConfigtool.exe on the configuration server and adding the account that you set
up in step 3 that will perform automatic discovery of VMware virtual machines.
d. Adding the vCenter server and vSphere host entries that are representing your on-premises
virtualization environment in the Azure portal.

8. Set up the target environment. As part of this step, you must specify the post-failover deployment
model. In this walkthrough, you will choose Resource Manager, but Site Recovery also supports the
classic deployment model. At this point, you will also have a chance to verify that you can use the
virtual network and the storage accounts that you created earlier to host replicas of protected virtual
machines and their disks. You can create the virtual network and storage accounts if this is not the
case.

9. Set up replication settings. This step involves configuring a replication policy and associating it with
the configuration server that you added in step 7a. The policy includes settings such as RPO
threshold, recovery point retention, and app-consistent snapshot frequency.

10. Select the VMware virtual machines to protect and enable their replication. This consists of the
following steps:

a. Install the Mobility Service on the virtual machines that you intend to protect. You can initiate the
installation from the process server, either by using your existing software deployment solution,
such as System Center Configuration Manager, or by doing it manually.

b. Configure the Replicate Applications settings. You must specify the vCenter server or vSphere
host that you selected in step 7d. In addition, you must select the process server if you installed it
on a computer other than the configuration server. You also must select the Azure virtual
network and the storage account you want to use to host replicas of protected virtual machines
and their disks. In addition, this step involves selecting the VMware virtual machines that you
want to protect. For each virtual machine, you can designate the account that the process server
will use to install the Mobility Service. You can also select disks that you want to exclude from
replication and specify the size of the replica Azure virtual machine. Finally, you also must choose
a replication policy that you want to take effect in this case.

Additional Reading: For more information, refer to: “Set up disaster recovery to Azure for
on-premises VMware VMs” at: http://aka.ms/Npb5bk

Implementing Azure-based protection of Azure VMs


In this topic, you will step through a sample
implementation of using Azure Site Recovery to
protect an Azure VM. Your implementation
consists of the following tasks:

1. Create an Azure virtual network in your Azure subscription in the Azure region that meets your
disaster recovery objectives.

2. Create one or more Azure storage accounts in the same subscription and the same region as the
Azure virtual network.

3. Create a Recovery Services vault in the same subscription and the same region as the storage
accounts and the virtual network.
4. Specify the protection goal of your implementation. When using the Azure portal, this is the first task
of the Prepare Infrastructure stage, which you initiate from the Site Recovery blade of the
Recovery Services vault. This task involves answering the following questions:

o Where are your machines located? Select the Azure - PREVIEW option.

o Where do you want to replicate your machines? Verify that the To Azure option is selected.

5. Select the protected virtual machines and enable their replication. This is part of the Replicate
Applications stage. You will need to specify the source location where the Azure VM that you intend
to protect resides, select its deployment model, and select its resource group or, in the case of a
classic VM, its cloud service. Site Recovery will identify and list the corresponding Azure VMs, and you
will be able to choose the ones that you intend to protect.

6. Configure replication settings. You can either choose the default replication settings or modify them
by designating a custom target location, target resource group, target virtual network, target storage
account, cache storage account, and, in the case of highly available VMs, target availability set. You
can also customize the corresponding replication policy, which determines such settings as recovery
point retention period, app-consistent snapshot frequency, and, in case you need to implement
multi-VM consistency, a replication group.

Additional Reading: For more information, refer to: “Set up disaster recovery for Azure
VMs to a secondary Azure region (Preview)” at: https://aka.ms/Rnxqxs

Managing and automating Site Recovery


After an on-premises computer appears in the
portal with the Protected status, you can
perform test failovers, planned failovers, or
unplanned failovers. When you do so, the
sequence of events differs depending on the
type of failover that you choose:

• In the case of a test failover, you specify the Azure virtual network to which you want to
fail over. To prevent any possibility of
impacting the production environment, this
should be an isolated network. Site Recovery
provisions new Azure virtual machines in the
virtual network by using replicas of the virtual disks that are residing in Azure Storage. The protected
virtual machines stay online. After you complete your testing, Site Recovery automatically
deprovisions the Azure virtual machines.

• In the case of a planned failover, Site Recovery shuts down the protected virtual machines to prevent
the possibility of data loss. Next, it provisions the corresponding Azure virtual machines by using
replicas of virtual disks residing in Azure Storage. It also places the new virtual machines in the
commit pending state. You must perform the commit action to complete the failover. This action
removes any existing recovery points in Azure Storage.
• In the case of an unplanned failover, Site Recovery provisions Azure virtual machines by using replicas
of virtual disks residing in Azure Storage. You can instruct Site Recovery to attempt to synchronize
protected virtual machines and shut them down, but such an action might not be possible in this
scenario. Alternatively, you can choose to use the latest recovery point available in Azure Storage. Site
Recovery will place the newly provisioned Azure virtual machines in the commit pending state. You
must perform the commit action to complete the failover. This action removes any existing recovery
points in Azure Storage.

Note: With all three types of failover, if you enable data encryption when you are running
the Azure Site Recovery Provider setup, you must provide the encryption certificate as part of a
failover.

After a planned or unplanned failover, once your primary site is back online, you should protect the
Azure virtual machines and establish reverse replication. This will allow you to fail back to the
on-premises location without data loss.
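
For the course's AzureRM era, these failover operations can also be driven from PowerShell with the AzureRM.RecoveryServices.SiteRecovery cmdlets. The following sketch assumes that $rpi already holds a replication protected item retrieved from the Recovery Services vault:

```powershell
# Start an unplanned failover of a protected item to Azure.
Start-AzureRmRecoveryServicesAsrUnplannedFailoverJob -ReplicationProtectedItem $rpi `
    -Direction PrimaryToRecovery

# After verifying the failed-over virtual machine, commit the failover,
# which removes the remaining recovery points.
Start-AzureRmRecoveryServicesAsrCommitFailoverJob -ReplicationProtectedItem $rpi
```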

Recovery plans
While you can perform failover and failback of individual protected computers, it is preferable for business
continuity to orchestrate disaster recovery of multiple computers. Site Recovery supports this scenario by
allowing you to create recovery plans.

A recovery plan consists of one or more recovery groups, which serve as logical containers of protected
virtual machines. You arrange groups in a sequence that dictates the order in which Site Recovery failover
and failback bring the protected virtual machines online. Within this sequence, you can add pre and post
actions. Each action can represent a manual recovery step or an Azure Automation runbook. By using
Azure Automation, you can fully automate your disaster recovery. You can also use it to provision and
configure additional Azure components, such as load balancers.

Site Recovery uses a context variable to pass a number of parameters to the Azure Automation runbook.
You can use these parameters to customize runbook activities. These parameters include:

• RecoveryPlanName. Name of the Site Recovery plan.

• FailoverType. Type of failover (test, planned, or unplanned).


• FailoverDirection. Direction of the failover (from the primary site to Azure or from Azure to the
primary site).

• GroupID. Identifier of a group within the recovery plan.

• VmMap. Collection of virtual machines within the group.
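
Within an Azure Automation runbook, Site Recovery delivers these parameters as properties of a single context object. The following PowerShell runbook sketch assumes that the context arrives as a parameter named $RecoveryPlanContext and that its property names match the list above; exact property casing may vary between Site Recovery versions:

```powershell
param (
    [Parameter(Mandatory = $false)]
    [Object]$RecoveryPlanContext
)

# Log the failover context that Site Recovery passes to the runbook.
Write-Output "Recovery plan:      $($RecoveryPlanContext.RecoveryPlanName)"
Write-Output "Failover type:      $($RecoveryPlanContext.FailoverType)"
Write-Output "Failover direction: $($RecoveryPlanContext.FailoverDirection)"
Write-Output "Group:              $($RecoveryPlanContext.GroupId)"

# VmMap is a collection keyed by virtual machine identifier.
foreach ($vmId in $RecoveryPlanContext.VmMap.PSObject.Properties.Name) {
    Write-Output "Virtual machine in group: $vmId"
}
```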

Demonstration: Replicate an Azure VM to another Azure region


In this demonstration, you will see how to:
• Replicate an Azure VM to another Azure region.

• Disable replication.

Check Your Knowledge


Question

What components can you include in a recovery plan for a failover to Azure?

Select the correct answer.

Groups containing protected virtual machines

Manual actions

Azure Automation runbooks

Web jobs

VMM library scripts



Lab: Implementing Azure Backup and Azure Site Recovery


Scenario
Adatum wants to evaluate the ability of Azure Backup to protect the content of on-premises computers
and Azure IaaS virtual machines. Adatum also wants to evaluate Azure Site Recovery for
protecting Azure VMs.

Objectives
After completing this lab, you will be able to:

• Implement Azure Backup.

• Implement Azure Site Recovery–based protection of Azure VMs.

Note: The lab steps for this course change frequently due to updates to Microsoft Azure.
Microsoft Learning updates the lab steps frequently, so they are not available in this manual. Your
instructor will provide you with the lab documentation.

Lab Setup
Estimated Time: 60 minutes
Virtual machine: 20533E-MIA-CL1

User name: Student

Password: Pa55w.rd
Before starting this lab, ensure that you have performed the “Preparing the demo and lab environment”
demonstration tasks at the beginning of the first lesson in this module and that the setup script has
completed.

Exercise 1: Protecting data with Azure Backup


Scenario
Adatum currently uses an on-premises backup solution. As part of your Azure evaluation, you want to test
the protection of on-premises master copies of your image files and invoices by backing them up to the
cloud. To accomplish this, you intend to use Azure Backup.

Exercise 2: Implementing protection of Azure VMs by using Site Recovery


Scenario
Adatum wants to test disaster recovery of its Azure VMs. As part of Adatum’s
evaluation of integration with Microsoft Azure, you have been asked to use Site Recovery to configure the
protection of your test Azure VM environment.

Question: Why did the lab not include failover and failback?

Question: If you wanted to protect Azure VMs that reside behind an Azure load balancer,
how would you configure your Site Recovery solution?

Module Review and Takeaways


Common Issues and Troubleshooting Tips
Common Issue: Enabling protection of a virtual machine fails or takes an extended period of time.

Troubleshooting Tip:

Review Question

Question: What do you think are the biggest benefits of Site Recovery?

Module 9
Implementing Azure Active Directory
Contents:
Module Overview 9-1
Lesson 1: Creating and managing Azure AD tenants 9-2

Lesson 2: Configuring application access with Azure AD 9-16

Lesson 3: Overview of Azure AD Premium 9-24


Lab: Implementing Azure AD 9-31

Module Review and Takeaways 9-33

Module Overview
Microsoft Azure Active Directory (Azure AD) is a cloud-based identity and access management solution.
By using Azure AD, you can protect services, applications, and data with multi-factor authentication and
single sign-on (SSO). This helps secure access to cloud and on-premises resources while simplifying end
user experience.
In this module, you will learn how to create an Azure AD tenant, assign a custom domain to it, integrate
applications with Azure AD, and use Azure AD Premium features. You will also find out how to implement
Azure Role-Based Access Control (RBAC) to grant Azure AD users, groups, and applications permissions to
manage Azure resources.

Objectives
After completing this module, you will be able to:

• Create and manage Azure AD tenants.

• Configure SSO for cloud and on-premises applications and implement RBAC for Azure resources.

• Explain the functionality of Azure AD Premium including Azure Multi-Factor Authentication.



Lesson 1
Creating and managing Azure AD tenants
Azure AD is the service in Azure that provides cloud-based identity and access management, in addition
to directory services. You can use Azure AD to provide secure access to cloud-based and on-premises
applications and services.

In this lesson, you will learn about the basic features of the Azure AD identity management and directory
services. The lesson starts by introducing these services in relation to Active Directory Domain Services
(AD DS) and comparing these two technologies.

Lesson Objectives
After completing this lesson, you will be able to:

• Explain the role of Azure AD.


• Identify the similarities and differences between Active Directory Domain Services (AD DS) and
Azure AD.

• Manage users, groups, and devices by using the Azure portal and Microsoft Azure PowerShell.

• Explain how to manage multiple Azure AD tenants.


• Explain how to implement Azure AD Business-to-Business (B2B) and Azure AD Business-to-Consumer
(B2C) services.

Demonstration: Preparing the lab environment


Perform the tasks in this demonstration to prepare the lab environment. The environment will be
configured while you progress through this module, learning about the Azure services that you will use in
the lab.

Important: The scripts used in this course might delete objects that you have in your
subscriptions. Therefore, you should complete this course by using new Azure subscriptions. You
should also use a new Microsoft account that is not associated with any other Azure subscription.
This will eliminate the possibility of any potential confusion when running setup scripts.

This course relies on custom Azure PowerShell modules including Add-20533EEnvironment to prepare
the lab environment, and Remove-20533EEnvironment to perform clean-up tasks at the end of the
module.

Active Directory as a component of Azure


Azure AD is a cloud-based identity and access
management service that provides SSO
functionality to thousands of Software as a Service
(SaaS) applications. Azure AD is, by design, highly
scalable and highly available. Organizations can
use Azure AD to improve employee productivity,
streamline IT processes, and improve security
when adopting cloud services or integrating their
on-premises environments with the cloud. Users
can access online applications without having to
maintain multiple user accounts.
Azure AD supports multi-factor authentication for
both on-premises and cloud-resident resources. Features such as Role-Based Access Control (RBAC), self-
service password and group management, and device registration provide additional capabilities that play
a significant role in enterprise identity management solutions.

Many applications built on different platforms such as .Net, Java, Node.js, and PHP can use industry
standard protocols such as Security Assertion Markup Language (SAML) 2.0, Web Services Federation
(WS-Federation), and OpenID Connect to integrate with the identity management provided by Azure AD.
With the support of Open Authorization (OAuth 2.0), developers can develop mobile and web service
applications that leverage Azure AD for cloud authentication and access management. They can also take
advantage of the support for Azure AD across a number of Platform as a Service (PaaS) services, such as
the Web Apps feature of Azure App Service, Azure SQL Database, or Azure Automation.

Organizations that use AD DS can synchronize users and groups from their Active Directory domains with
Azure AD to enable a SSO experience for their users accessing both on-premises and cloud-based
applications.

Overview of Azure AD
Azure AD is a Microsoft-managed, cloud-based,
PaaS identity and access management solution. It
provides secure access for organizations and
individuals to cloud-resident services such as
Azure, Microsoft Office 365, Microsoft Dynamics
365, and Microsoft Intune. It also facilitates
seamless authentication to on-premises
applications. You can use Azure AD to:

• Provision and manage users and groups.


• Configure SSO to cloud-based SaaS
applications.

• Configure access to applications.

• Implement identity protection.

• Configure Multi-Factor Authentication.

• Integrate with existing on-premises Active Directory deployments.


• Enable federation between organizations.

As a cloud-based service, Azure AD offers multitenancy and scalability:

• Multitenancy. Azure AD is multitenant by design, ensuring isolation between its individual directory
instances. The term tenant in this context typically represents an individual, a company, or an
organization that signed up for a subscription to a Microsoft cloud-based service such as Office 365,
Microsoft Intune, or Microsoft Azure, each of which leverages Azure AD. However, from a technical
standpoint, the term tenant represents an individual Azure AD instance. As an Azure customer, you
can create multiple Azure AD tenants. This is useful if you want to test Azure AD functionality in one
without affecting the others. Each Azure AD tenant serves as a security boundary and a container for
Azure AD objects such as users, groups, and applications.

• Scalability. Azure AD is the world’s largest multitenant directory, hosting over a million directory
services instances, with billions of authentication requests per week.

Azure AD editions
To meet a wide range of customers' needs, Azure AD is available in four editions:
• The Free edition offers user and group management, device registration, self-service password
change for cloud users, synchronization with on-premises directories, B2B collaboration, and basic
reporting. It is limited to 10 applications per user configured for SSO and 500,000 objects.

• The Basic edition extends the free edition’s capabilities by including company branding of sign-in
pages and the portal through which users access their applications, group-based access management,
and self-service password reset for cloud users. Additionally, this edition offers a 99.9% uptime service
level agreement (SLA). The Basic edition does not impose limits on the number of directory objects,
but has a limit of 10 apps per user configured for SSO, just as the Free edition does. The SSO
capability includes support for on-premises applications by leveraging Azure Active Directory
Application Proxy (AD Application Proxy).
• The Premium P1 edition is designed to accommodate organizations with the highest identity and
access management needs. In addition to features available in Azure AD Basic, it supports dynamic
groups and self-service group management, self-service password reset with password writeback for
Active Directory users, automatic password rollover for group accounts, two-way synchronization of
device objects with on-premises directories, conditional access based on group and location,
conditional access based on device state, Multi-Factor Authentication (MFA), the Cloud App Discovery
feature of Azure Active Directory, Azure AD Connect Health, advanced security and usage reports,
Microsoft Identity Manager per-user client access licenses (CALs), Azure Information Protection
support, and integration with third-party identity governance partners. It offers support for an
unlimited number of objects and unlimited number of apps per user configured for SSO.

• The Premium P2 edition offers a few significant benefits in addition to those that are available in the
Premium P1 edition. These benefits include Azure AD Identity Protection, Privileged Identity
Management, third-party MFA integration, and Cloud App Security proxy.

Note: You can join Windows 10 computers to an Azure AD tenant regardless of its edition.
However, Premium P1 and P2 additionally support auto-enrollment into Mobile Device
Management (MDM) solutions, such as Microsoft Intune, self-service BitLocker recovery,
Enterprise State Roaming, and the addition of local administrators during Azure AD join.

Additional Reading: For a comprehensive listing of features available in different


Azure AD editions, refer to: “Azure Active Directory pricing” at: https://aka.ms/C7u9xm

AD DS
Active Directory Domain Services (AD DS) is another Microsoft directory service and an identity
management solution. AD DS forms the foundation of enterprise networks that run Windows operating
systems. As a directory service, AD DS hosts a distributed database, residing on servers referred to as
domain controllers and storing identity data about users, computers, and applications.
Most Active Directory–related tasks require successful authentication. To authenticate to Active Directory
successfully, users, computers, or applications must provide credentials to the authenticating domain
controller. In response to an authentication request, the domain controller issues a token that represents
the status and privileges of the token recipient. The token determines the level of access to resources such
as file shares, applications, or databases that domain computers are hosting. The basis of AD DS
authentication and authorization is the implicit trust that each domain-member computer maintains with
domain controllers. You establish this trust by joining computers to the domain, which adds an account
that represents your computer to the AD DS database.

A range of Windows Server roles, such as Active Directory Certificate Services (AD CS), Active Directory
Rights Management Services (AD RMS), and Active Directory Federation Services (AD FS), leverage the
same functionality. The AD DS database also stores management data, which is critical for administering
user and computer settings through Group Policy processing.
When comparing AD DS with Azure AD, it is important to note the following characteristics of AD DS:

• AD DS is by design single-tenant.

• AD DS is a directory service with a hierarchical X.500-based structure.


• AD DS uses Domain Name System (DNS) for locating services such as domain controllers.

• AD DS relies on protocols such as Lightweight Directory Access Protocol (LDAP) for directory lookups
and Kerberos for authentication, which were designed to operate within secure, isolated networks.

• AD DS facilitates Group Policy Object (GPO)–based management.

• AD DS supports users, groups, and AD-aware applications.

• AD DS supports computer objects, representing computers that join an Active Directory domain.

• AD DS supports multi-domain forests.

You can deploy an AD DS domain controller on an Azure VM to provide the same functionality as an on-
premises AD DS deployment. Such a deployment typically requires one or more additional Azure data
disks, because you should not use the C drive for storing the AD DS database, logs, and SYSVOL. You
must set the Host Cache Preference setting for these disks to None.

Note: Deploying an AD DS domain controller on an Azure VM is not an example of using


Azure AD. Instead it is an example of using the Azure Infrastructure as a Service (IaaS) platform to
host AD DS.
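The data disk requirement described above can be sketched with the AzureRM PowerShell module that this course uses elsewhere. The resource group, VM, and disk names below are hypothetical, and the commands assume an existing VM and an authenticated Azure session:

```powershell
# Assumes an authenticated AzureRM session (Login-AzureRmAccount) and an existing VM.
# All names are placeholders for illustration only.
$vm = Get-AzureRmVM -ResourceGroupName 'AdatumRG' -Name 'AdatumDC1'

# Add a dedicated data disk for the AD DS database, logs, and SYSVOL,
# with host caching disabled (-Caching None), as required for domain controllers.
$vm = Add-AzureRmVMDataDisk -VM $vm -Name 'AdatumDC1-NTDS' `
    -DiskSizeInGB 64 -Lun 0 -CreateOption Empty -Caching None

# Apply the configuration change to the VM.
Update-AzureRmVM -ResourceGroupName 'AdatumRG' -VM $vm
```

After the disk is attached, you would initialize and format it within the guest operating system before promoting the server to a domain controller.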

Azure AD
Although Azure AD and AD DS are both identity and access management solutions, there are some
fundamental differences between them. The following are some of the characteristics that differentiate
Azure AD from AD DS:

• Azure AD is multitenant by design.

• Azure AD object hierarchy is flat, with no support for containers or organizational units (OUs).

• Azure AD implementation does not rely on domain controllers.



• Azure AD supports protocols that facilitate secure communication over the internet.

• Azure AD does not support Kerberos authentication; instead, it uses protocols such as SAML,
WS-Federation, and OpenID Connect for authentication.

• Azure AD does not support LDAP; instead, it relies on Graph application programming interface (API)
for directory lookups.

• Azure AD does not provide management capabilities equivalent to those available in AD DS. For
example, it does not support GPOs. To manage Azure AD–joined devices, you can use device
management products such as Microsoft Intune.

• Azure AD provides identities for users, groups, devices, and web-based applications.

Note: When you register a new application in an Azure AD tenant, besides creating an
application object which represents an actual software application, you also automatically
generate a service principal object. Service principal provides the security and authentication
context for the corresponding application. This allows you, for example, to grant permission to
this application through RBAC, as you would grant permissions to Azure AD users or groups.
If you register the same application in another Azure AD tenant, that tenant would contain only
the corresponding service principal. The application object exists only in the first Azure AD tenant
where you registered the application.

• Azure AD supports device objects representing devices that register with or join an Azure AD tenant.

• By using AD B2C, you can federate with third-party identity providers (such as Facebook). You can
also federate AD DS with Azure AD. However, the process of integrating Azure AD tenants is different
from creating AD DS domains or forest trusts.

Custom domain names


Each Azure AD tenant is assigned the default DNS domain name, consisting of a unique prefix, followed
by the onmicrosoft.com suffix. The prefix is either derived from the name of the Microsoft account you
use to create an Azure subscription or provided explicitly when you create an Azure AD tenant. It is
common to add at least one custom domain name to the same Azure AD tenant. This name utilizes the
DNS domain namespace that the tenant’s company or organization owns.

To add a custom domain name to your Azure AD tenant, you can use:
• A Microsoft cloud service portal, such as the Azure portal, Office 365 admin center, or Microsoft
Intune admin console.

• Azure Active Directory PowerShell.

To add a custom domain name to an Azure AD tenant by using one of the Microsoft cloud service portals,
perform the following steps:

1. In the portal, specify the custom domain name.


2. In the portal, note the DNS records that you need to create at your domain registrar or DNS-hosting
provider.

3. Sign in to your domain registrar or DNS-hosting provider, and create the DNS records.
4. Back in the portal, verify that the Azure AD tenant can resolve the newly created DNS records for the
custom domain.

Before you can verify a custom domain, the domain name must already be registered with a domain
name registrar, and you must have appropriate access to create DNS records for this domain. You can
create either TXT records, which are preferable, or MX (mail exchange) records, if your DNS provider does
not support TXT records.

The following is an example of a TXT record used for custom domain verification:

Alias or Host name: @

Destination or Points to Address: MS=ms96744744

TTL: 1 hour

After verification, the administrator can designate the newly verified domain to be the primary domain for
the Azure AD tenant. For example, you can replace adatum12345.onmicrosoft.com with adatum.com.
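The portal steps above have PowerShell equivalents in the Azure AD V2 module. The following is a sketch using a hypothetical domain name; it assumes a session authenticated as a Global administrator of the tenant:

```powershell
# Register the custom domain with the Azure AD tenant (unverified at this point).
New-AzureADDomain -Name 'adatum.com'

# Display the DNS records (TXT or MX) to create at the domain registrar.
Get-AzureADDomainVerificationDnsRecord -Name 'adatum.com'

# After the DNS records have propagated, trigger the verification.
Confirm-AzureADDomain -Name 'adatum.com'
```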

Managing Azure AD users, groups, and devices


You can manage Azure AD users, groups, and
devices by using the Azure portal, Azure Active
Directory PowerShell, Microsoft Intune admin
console, or Office 365 admin center. There are
three basic ways to create users, groups, and
devices in Azure AD:

• As cloud identities defined directly in the


Azure AD tenant.

• As directory-synchronized identities
generated through synchronization between
on-premises Active Directory and an Azure AD
tenant. This method requires installing and
configuring specialized software that synchronizes directory objects between the two directories.
• As guest users, which represent users defined in other Azure AD tenants, users with Microsoft
accounts, or users with accounts from other identity providers.

The Azure portal provides an intuitive web interface for creating and managing users, groups, and
devices.

Creating users with the Azure portal


Using the Azure portal is the most straightforward method for creating individual user accounts. To create
a user by using the Azure portal, perform the following steps:
1. In the Azure portal, in the hub menu, click Azure Active Directory.

2. Click Users.

3. On the All users blade, click + New user.


4. On the User blade, enter the following user information:

o Name: the display name


o User name: unique name with the suffix that matches the default DNS domain name or a custom
verified DNS domain name that you associated with the Azure AD tenant. This is the name that
the new user will provide when signing in.

o Profile: first name, last name, job title, and department



o Properties: Source of authority (Azure Active Directory)

o Groups: groups of which the user should be a member

o Directory role: User, Global administrator, or Limited administrator. If you choose Limited
administrator, you will have the option to delegate any of the directory roles, including Billing
administrator, Compliance administrator, Conditional Access administrator, Exchange
administrator, Guest inviter, Password administrator, Information Protection administrator, Intune
Service administrator, Skype for Business administrator, Privileged role administrator, Reports
reader, Security administrator, Security reader, Service administrator, SharePoint administrator,
and User administrator.

5. To display the temporary, automatically generated password, select the Show Password check box.

6. Click Create to finalize the user creation.

Note: After creating a user via the Azure portal, make sure to assign the usage location
property available on the user profile blade. You must set this property if you want to assign a
license for a paid edition of Azure AD to that user.
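As a sketch of the assignment that the note describes, you could use the Azure AD V2 module; the user name below is hypothetical and the command assumes an authenticated session:

```powershell
# Set the usage location, which is required before assigning a paid Azure AD license.
Set-AzureADUser -ObjectId 'mledford@adatum.com' -UsageLocation 'US'
```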

Creating guest users with the Azure portal


To create a guest user by using the Azure portal, perform the following steps:
1. In the Azure portal, in the hub menu, click Azure Active Directory.

2. Click Users.

3. On the All users blade, click + New guest user.


4. On the Invite a guest blade, enter the following user information:

o Enter email address of the external user: user name (in the username@fqdn format)
representing a user in another Azure AD tenant or a different identity provider

o Include a personal message with the invitation: a custom message that the guest user will receive
as part of the guest user provisioning process

5. Click Invite to send the invitation email.

The email includes a link that directs the guest user to its identity provider. Once the authentication
completes successfully, the user is redirected to a web portal, which provides access to Azure AD–
registered applications that you make available to the guest user.

Note: The Access Panel is the web portal that is accessible to both Azure AD users and
guest users. You will learn about it in Lesson 3 of this module.

Additional Reading: You can reach Azure AD Access Panel directly by browsing to
https://myapps.microsoft.com
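Guest user invitations can also be scripted with the New-AzureADMSInvitation cmdlet from the Azure AD V2 module. The invitee address and display name below are hypothetical, and the command assumes an authenticated session:

```powershell
# Invite an external user and send the invitation email.
# The redirect URL determines where the guest lands after redeeming the invitation.
New-AzureADMSInvitation -InvitedUserEmailAddress 'partner@contoso.com' `
    -InvitedUserDisplayName 'Partner User' `
    -SendInvitationMessage $true `
    -InviteRedirectUrl 'https://myapps.microsoft.com'
```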

Managing devices in the Azure portal


Users can join their Windows 10 devices to Azure AD either during the first-run experience or from the
system settings. If users use their Azure AD credentials to sign in to Windows 10, they can benefit from
SSO functionality when accessing Office 365 and any other applications, web apps, or services that use
Azure AD for authentication, including the Azure portal and the Access Panel.

You can disable the ability to join devices to Azure AD or restrict it to specific Azure AD users or groups.
You can also limit the maximum number of devices per user and enforce multi-factor authentication when
joining devices in Azure AD. These options are available from the Devices – Device settings blade in the
Azure portal.
After a user registers a device in Azure AD, you can control its usage. For example, if you determine that
the device has been lost or compromised, you can block its ability to authenticate or simply delete its
Azure AD object. If you purchased Azure AD Premium P1 or P2, you can configure conditional access
based on the device platform. If Microsoft Intune or another MDM system manages the device, you can
implement additional conditions and capabilities such as policy-based configuration and software
deployment.

Managing users, groups, and devices by using Windows PowerShell


You can also manage users, groups, and devices by using Microsoft Azure Active Directory V2 PowerShell
module. The module is available on Windows 7 or newer and Windows Server 2008 R2 or newer
operating systems, with their default versions of Microsoft .NET Framework and Windows PowerShell. You
can find it in the PowerShell Gallery at https://aka.ms/Ofa6p0. To install it, you can leverage the
functionality available via the PowerShellGet module and simply run the following command:

Install-Module -Name AzureAD

The installation requires the Windows PowerShell NuGet provider, which you can install separately by
running the following command:

Install-PackageProvider -Name NuGet -MinimumVersion 2.8.5.201 -Force

Alternatively, you can choose to include the NuGet provider when installing the PowerShell module.
PowerShellGet will automatically prompt you for confirmation if it detects that the NuGet provider is
missing.

Once you have installed the module, you can connect to Azure AD by running the following command
from the Windows PowerShell prompt:

$AzureAdCred = Get-Credential
Connect-AzureAD -Credential $AzureAdCred

The first cmdlet will prompt you for the credentials to authenticate to your Azure AD tenant. To proceed,
specify a user account that is a member of the Global administrator role (or another role that grants
permissions sufficient to create user and group accounts).

To create a new user account and force the user to change the temporary password during the first sign-
in, run the following sequence of commands:

$passwordProfile = "" | Select-Object Password,ForceChangePasswordNextLogin
$passwordProfile.ForceChangePasswordNextLogin = $true
$passwordProfile.Password = 'Pa55w.rd1234'
New-AzureADUser -UserPrincipalName 'mledford@adatum.com' `
-DisplayName 'Mario Ledford' `
-GivenName 'Mario' `
-Surname 'Ledford' `
-PasswordProfile $passwordProfile `
-UsageLocation 'US' `
-AccountEnabled $true `
-MailNickName 'mledford'

To create a security group, run the following command:

New-AzureADGroup -Description 'Adatum Azure Team Users' `
-DisplayName 'Azure Team' `
-MailEnabled $false `
-MailNickName 'AzureTeam' `
-SecurityEnabled $true

To identify all devices registered in Azure AD along with their users, run the following command:

Get-AzureADDevice -All $true | Get-AzureADDeviceRegisteredUser

To enable or disable registered devices, run the following cmdlet:

Get-AzureADDevice -All $true | Set-AzureADDevice -AccountEnabled $false

To remove a device from Azure AD management, run the following cmdlet:

Remove-AzureADDevice -DeviceId a7892334-730b-4d49-bd13-54c2a4928009

You can also manage users, groups, and devices by using the MSOnline V1 PowerShell module for Azure
Active Directory.

Additional Reading: You can download Microsoft Azure Active Directory module for
Windows PowerShell from Azure ActiveDirectory (MSOnline) at: https://aka.ms/Jcwj06

After you install the MSOnline V1 PowerShell module for Azure Active Directory, to connect to Azure AD,
run the following command at the Windows PowerShell prompt:

Connect-MsolService

This cmdlet will prompt you for the credentials to authenticate to your Azure AD tenant. To proceed,
specify an account that is a member of the Global administrator role (or another role that grants
permissions sufficient to create user and group accounts).
To create a user account by using Microsoft Azure Active Directory Module for Windows PowerShell, run
the following cmdlet:

New-MsolUser -UserPrincipalName mledford@adatum.com `
-DisplayName "Mario Ledford" `
-FirstName "Mario" `
-LastName "Ledford" `
-Password 'Pa55w.rd123' `
-ForceChangePassword $false `
-UsageLocation "US"

To create a group by using Microsoft Azure Active Directory Module for Windows PowerShell commands,
run the following cmdlet:

New-MsolGroup -DisplayName "Azure team" -Description "Adatum Azure team users"

Microsoft Azure Active Directory module for Windows PowerShell also provides cmdlets for managing
devices registered in Azure AD. For example, to query all the devices that a specific user owns, run the
following cmdlet:

Get-MsolDevice -RegisteredOwnerUpn 'mledford@adatum.com'



Note: The Azure Active Directory V2 PowerShell module does not include a cmdlet
that would allow you to identify devices associated with a specific user. You can, however,
use a combination of its existing cmdlets (for example, Get-AzureADDevice and
Get-AzureADDeviceRegisteredUser) and parse their output to obtain this information.

To enable or disable registered devices, use the Enable-MsolDevice or Disable-MsolDevice cmdlet,
respectively. For example:

Enable-MsolDevice -DeviceId a7892334-730b-4d49-bd13-54c2a4928009

To remove a device from Azure AD management, run the following cmdlet:

Remove-MsolDevice -DeviceId a7892334-730b-4d49-bd13-54c2a4928009

Creating users by using bulk import


To create multiple Azure AD users in bulk, you can use Azure PowerShell scripting or import a comma-
separated value file (CSV file) containing account information. For example, you can export a CSV file from
an existing on-premises Active Directory instance. To perform a bulk import, you first must collect user
information. The following example illustrates a sample collection of user details that you could use to test
this functionality.

UserName              FirstName  LastName  DisplayName     JobTitle    Department  Country
AnneW@adatum.com      Anne       Wallace   Anne Wallace    President   Management  United States
FabriceC@adatum.com   Fabrice    Canel     Fabrice Canel   Attorney    Legal       United States
GarretV@adatum.com    Garret     Vargas    Garret Vargas   Operations  Operations  United States

Given this data set, you would need to create a CSV file in the following format:

UserName,FirstName,LastName,DisplayName,JobTitle,Department,Country
AnneW@adatum.com,Anne,Wallace,Anne Wallace,President,Management,United States
FabriceC@adatum.com,Fabrice,Canel,Fabrice Canel,Attorney,Legal,United States
GarretV@adatum.com,Garret,Vargas,Garret Vargas,Operations,Operations,United States

You could then use Microsoft Azure Active Directory Module for Windows PowerShell commands to
process this CSV file and create the user accounts as shown below:

$users = Import-Csv C:\Users.csv
$users | ForEach-Object {
New-MsolUser -UserPrincipalName $_.UserName `
-FirstName $_.FirstName `
-LastName $_.LastName `
-DisplayName $_.DisplayName `
-Title $_.JobTitle `
-Department $_.Department `
-Country $_.Country
}

Note: You can use the same approach when using the New-AzureADUser cmdlet.
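As the note indicates, a similar script works with the New-AzureADUser cmdlet. The following sketch assumes the same CSV layout; because New-AzureADUser requires a password profile and a mail nickname, and has no direct Country parameter, those details differ from the New-MsolUser version:

```powershell
# Build a shared password profile for the new accounts.
$passwordProfile = New-Object -TypeName Microsoft.Open.AzureAD.Model.PasswordProfile
$passwordProfile.Password = 'Pa55w.rd1234'
$passwordProfile.ForceChangePasswordNextLogin = $true

$users = Import-Csv C:\Users.csv
$users | ForEach-Object {
    New-AzureADUser -UserPrincipalName $_.UserName `
        -GivenName $_.FirstName `
        -Surname $_.LastName `
        -DisplayName $_.DisplayName `
        -JobTitle $_.JobTitle `
        -Department $_.Department `
        -PasswordProfile $passwordProfile `
        -AccountEnabled $true `
        -MailNickName ($_.UserName.Split('@')[0])
}
```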

Managing Azure AD tenants


By default, you automatically get an Azure AD
tenant when you sign up for an Azure, Office 365,
Microsoft Dynamics 365, or Microsoft Intune
subscription. That tenant authenticates users
defined in its directory. You can also create
additional tenants as needed.

Note: The terms tenant and directory in the context of Azure AD are equivalent and
interchangeable.

Note: At any given time, an Azure subscription must be associated with one, and only one,
Azure AD tenant. This association allows you to grant permissions to resources in the Azure
subscription (via RBAC) to users, groups, and service principals that exist in that particular Azure
AD tenant. Note that you can associate the same Azure AD tenant with multiple Azure
subscriptions. This allows you to use the same users, groups, and service principals to access and
manage resources across multiple Azure subscriptions.

Support for multiple Azure AD tenants facilitates the following scenarios:


• Creating separate directories for testing or other non-production purposes.

• Managing multiple Azure AD tenants by using the same user credentials—as long as the
corresponding user account is a Global administrator in each of them.

• Adding existing users as guests to multiple Azure AD tenants, eliminating the need to maintain
multiple credentials for the same user.

Adding a new Azure AD tenant


To add an Azure AD tenant, sign in to the Azure portal, click + Create a resource, click Security +
Identity, and then click Azure Active Directory. On the Create Directory blade, specify the following
settings and click Create:
• Organization name: any custom name you want to assign to the new tenant

• Initial domain name: a unique, valid DNS host name in the .onmicrosoft.com namespace

• Country or region: the geopolitical area where the Azure AD tenant will reside

Changing the association between an Azure subscription and an Azure AD tenant


To change the association between an Azure subscription and an Azure AD tenant, you must sign in to
the Azure portal as the Service Administrator of the subscription. Your account must also be a Global
administrator in both the current and the target Azure AD tenant.

Once you sign in to the Azure portal, in the hub menu, click Subscriptions, and on the Subscriptions
blade, click the entry representing your Azure subscription. On the subscription blade, click Change
directory. On the Change the directory blade, select the target Azure AD tenant and click Change.
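Before making this change, you can verify which tenant each of your subscriptions is currently associated with. The following Azure PowerShell sketch assumes the AzureRM module used elsewhere in this course; the output of Get-AzureRmSubscription includes the TenantId of the associated Azure AD tenant:

```powershell
# Sign in, then list each subscription together with the ID of its associated Azure AD tenant
Login-AzureRmAccount
Get-AzureRmSubscription
```

Comparing the TenantId values before and after the change is a quick way to confirm that the new association took effect.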

Deleting an Azure AD tenant


By using a guest user account with the Global administrator role, you can delete an Azure AD tenant if the
following conditions are met:

• You deleted all users except the guest account you are using.

• You deleted all registered applications.

• The directory is not associated with any of the cloud services such as Azure, Office 365, or Azure AD
Premium.

• No multi-factor authentication providers are linked to the directory.


To delete an Azure AD directory from the Azure portal, navigate to its blade and click Delete directory.
Review the list of requirements, verify that all of them are satisfied, and click Delete.

Implementing Azure AD B2B and Azure AD B2C

Azure AD B2B
Azure AD Business-to-Business (B2B) is a
collaboration functionality available in any Azure
AD tenant that is intended for sharing resources
with partner organizations. In a typical Azure AD
B2B scenario, the tenant contains two types of
user accounts:
1. User accounts of employees of the host
organization that owns the resources and the
tenant.
2. Guest accounts representing user accounts in the partner organization.

Partner user accounts can be either work or school accounts from the partner organization's Azure AD
tenant. They can also originate from any identity provider, including social identities.
Azure AD B2B uses an invitation model to provide partner users with access to your applications. This is
the same mechanism that we described earlier, in the “Creating guest users with the Azure portal” section
of the “Managing Azure AD users, groups, and devices” topic of this lesson.

Azure AD B2B is highly customizable and offers a range of enhancements, including the following:

1. Support for SSO to all Azure AD–connected apps registered in the tenant of the host organization,
including Office 365, non-Microsoft SaaS apps, and on-premises apps.

2. Multi-factor authentication to hosted apps, on the tenant, app, or individual user level.

3. Support for delegation, allowing designated information workers to invite partner users.

4. Development of custom sign-in pages and invitation emails for partner users.

5. Bulk partner user provisioning by using CSV file uploads.

Additional Reading: For more information about Azure AD B2B, refer to: “What is Azure
AD B2B collaboration?” at: https://aka.ms/nlxzsb

Azure AD B2C
Azure AD Business-to-Consumer (B2C) is a dedicated Azure AD tenant intended for providing individual,
institutional, and organizational customers with access to custom web apps, mobile apps, API apps, and
desktop apps. In a typical Azure AD B2C scenario, the tenant contains customer user accounts only. These
accounts can reside directly in the Azure AD B2C tenant or can originate from any identity provider,
including social identities.

Note: Azure B2C is a distinct product offering, separate from the Azure AD tenant that is
provisioned as part of your Azure subscription. Support for federating an Azure B2C tenant with
an Azure AD tenant is in preview at the time of authoring this content. This support allows Azure
AD users to access Azure B2C applications.

Azure AD B2C offers Identity as a Service (IDaaS) for your applications by supporting OpenID Connect,
OAuth 2.0, and SAML. Azure AD B2C eliminates the requirement for developers to write code for
identity management and for storing identities in on-premises databases or systems. It simplifies and
standardizes consumer identity management by allowing your consumers to sign up for and sign in to
your applications by using their social accounts. These accounts can originate from identity providers such
as Facebook, Google, Amazon, LinkedIn, and Microsoft account. A number of other identity providers,
including Twitter, WeChat, Weibo, and QQ, are in preview at the time of authoring this content. Users can
also create their accounts directly in the Azure B2C tenant.
To start using Azure AD B2C, you must create a new tenant by performing the following steps:

1. Sign in to the Azure portal.

2. In the hub menu, click + Create a resource. On the New blade, in the search text box, type Azure
Active Directory B2C, and then press Enter.

3. On the Azure Active Directory B2C blade, click Create.


4. On the Create new B2C Tenant or Link to existing Tenant blade, select the first of the following
two options:

o Create a new Azure AD B2C Tenant

o Link an existing Azure AD B2C Tenant to my Azure subscription

5. On the Azure AD B2C Create Tenant blade, specify the following, and then click Create:

o Organization name: any custom name you want to assign to the new tenant

o Initial domain name: a unique, valid DNS host name in the .onmicrosoft.com namespace
o Country or region: the geopolitical area where the Azure AD tenant will reside

6. Once the provisioning completes, click the Click here to manage your new directory link. This will
open the Azure AD B2C blade in the Azure portal.

Note: To use a B2C tenant in a production environment, you must link it to an Azure
subscription for communication, billing, and support purposes. To accomplish this, repeat the
procedure described above, but select the second of the two options listed in step 4.

You must register applications that are integrated with Azure AD B2C in your B2C directory. You can
complete this registration in the Azure portal. During the registration process, each application gets a
unique Application ID and Redirect Uniform Resource Identifier (URI) or Package Identifier. B2C supports
native apps, mobile apps, web apps, and web APIs that are using the App Model 2.0 registration model.
Developers use Application ID and Redirect URI to configure authentication for their applications.
To register an application in an Azure AD B2C tenant, perform the following steps:

1. On the Azure AD B2C blade, click Applications.

2. Click +Add.

3. On the New application blade, type the name of the application:

o If you are registering a web application, toggle the Include web app/web API switch to Yes.
Allow or disallow implicit flow by using the corresponding switch, and type a value for Reply URL.
This designates an endpoint to which Azure AD B2C will send authentication tokens. Optionally,
provide an App ID URI. This value serves as a unique identifier of the web API.
o If you are deploying a native client app, such as a mobile or a desktop app, toggle the Include
native client switch to Yes. Copy the autogenerated Redirect URI and provide a Custom
Redirect URI.

4. Click Create to register your application.


5. On the Azure AD B2C – Applications blade, click the application that you just created, and copy the
globally unique Application ID that your developers will need to reference in the application code.
6. If you want to facilitate secure communication between the application and the web API that Azure
AD B2C provides, generate application keys from the application Keys blade.

The next step in providing access to applications available via Azure AD B2C is to define policies. Policies
define the consumer experience during identity management actions that Azure AD B2C provides, such as
sign-up, sign-in, or password reset. For example, policies can restrict identity providers, specify the
information that prospective users must provide when signing up, or enforce the use of multi-factor
authentication. You can define multiple policies and apply each of them to any application registered with
the tenant. You can accomplish this task directly from the policy blade in the Azure portal.

Additional Reading: For more information about Azure AD B2C, refer to: “Azure AD B2C:
Focus on your app, let us worry about sign-up and sign-in” at: https://aka.ms/nlxzsb

Demonstration: Managing Azure AD users, groups, and devices


In this demonstration, you will learn how to:

• Create a new directory called Adatum.

• Create a new Global Administrator user account.

• Join a Windows 10–based computer to Azure AD.

Question: What are the similarities between AD DS and Azure AD?

Question: Can you use Group Policy in Azure AD?



Lesson 2
Configuring application access with Azure AD
As the number of cloud-based applications grows, their management becomes increasingly challenging.
Administrators must ensure that they provide end users with secure application access. However, a focus
on security should not negatively affect users' sign-in experience.

Azure AD addresses these challenges by allowing you to implement SSO for authenticating to cloud and
on-premises applications. Additionally, Azure AD allows you to restrict access to Azure-based resources
through RBAC.

Lesson Objectives
After completing this lesson, you will be able to:

• Describe how to add publicly-accessible applications to Azure AD.

• Describe how to add on-premises applications to Azure AD.

• Describe how to configure access to Azure AD-integrated applications.

• Implement RBAC.

Adding publicly accessible applications to Azure AD

Azure Marketplace Azure AD apps


Azure Marketplace Azure AD apps provide direct
integration with Azure AD. The integration offers
features such as SSO and, in some cases, automatic
user provisioning. Examples of Marketplace
applications include Office 365, Dropbox for
Business, and Salesforce.

Additional Reading: To view all currently available commercial Azure AD applications, go to
the Azure Marketplace at: http://aka.ms/Htfnef and then click Azure Active Directory apps.

At the time of authoring this course, more than 2,900 SaaS applications are integrated with Azure AD for
authentication and authorization. You can configure and manage applications from the Enterprise
Application blade of the Azure AD tenant in the Azure portal. To add an application from the gallery,
perform the following steps:

1. Sign in to the Azure portal with an account that has the Global administrator role.

2. Navigate to the blade of your Azure AD tenant.

3. Click Enterprise Applications.

4. On the Enterprise Application – All applications blade, click +New application.

5. On the Categories blade, click All or click the category in which you are interested.

6. On the Add an application blade, select the application which you want to add to your Azure AD
tenant.

7. Once you have added the app, from the app blade, you will be able to assign app access to individual
Azure AD users or, with the Basic and Premium Azure AD editions, to groups.

8. From the app blade, you will also be able to configure the single sign-on settings for the app.

SaaS applications not listed in the gallery


If a web-based, publicly accessible application is not available via Azure Marketplace, you can still
integrate it with Azure AD if the SaaS application supports Azure AD authentication protocols or if the
application has an HTML-based sign-in page with a password SSO.

For SaaS applications that support SAML, WS-Federation, or OpenID Connect, authentication with
Azure AD is established by using a signing certificate that is generated by the Azure AD tenant. For SaaS
applications that feature HTML-based sign-in page, authentication is enabled by leveraging Azure AD
support for password-based SSO.

Note: Adding custom SaaS applications that support WS-Federation or OpenID Connect
requires writing custom code. You can add custom SaaS applications that support SAML 2.0
directly from the Azure portal.

To add a SaaS application that supports SAML but is not listed in the gallery, perform the following steps:

1. Sign in to the Azure portal with the account that has the Global administrator role.

2. Navigate to the blade of your Azure AD tenant.

3. Click Enterprise Applications.

4. On the Enterprise Application – All applications blade, click +New application.


5. On the Add an application blade, click Non-gallery application.

6. On the Add your own application blade, type the name you want to assign to your application. This
name will be visible to your users after you grant them access to the application.

7. Click Add.

8. From the Quick start blade of the application, configure its properties.

Note: Adding custom applications requires Azure AD Premium.

For custom SAML–based applications, to implement SSO authentication, you must configure the following
settings:

• Identifier. A unique identifier for the application for which SSO is being set up.

• Reply URL. The URL where the application expects to receive the authentication token.

Based on this information, Azure AD will generate a certificate and the following three URLs that need to
be configured with the SaaS application:

• Issuer URL. This is the value that appears as the Issuer inside the SAML token issued to the
application.

• Single Sign-On Service URL. This is the endpoint that is used for sign-in requests.

• Single Sign-Out Service URL. This is the endpoint that is used for sign-out requests.

Adding on-premises applications to Azure AD


Azure AD Application Proxy is a cloud service that
facilitates integration of on-premises, web
browser-based applications (such as SharePoint
sites, Outlook Web Access, and IIS-based
applications) with Azure AD. Azure AD
Application Proxy relies on a reverse-proxy
mechanism to provide access from the internet to
HTTP and HTTPS endpoints within your internal
network.
To implement such access via Azure AD
Application Proxy, you must install a software-
based connector on an on-premises server with
direct access to the web application. The connector establishes a persistent, outbound connection to the
Application Proxy service over TCP ports 80 and 443.
Azure AD Application Proxy provides access to AD DS-based applications by using the following
procedure:

1. The user attempts to access the Azure AD Application Proxy–published application via a web browser
from a device outside the company perimeter network.

2. The Application Proxy redirects the user sign-in to Azure AD for authentication.

3. The user obtains the token from Azure AD and presents it to the Application Proxy, which retrieves
the user principal name (UPN) and service principal name (SPN).

4. The connector installed in the internal network retrieves the user attributes via the outbound
connection to the Application Proxy and requests a Kerberos ticket on behalf of the user from AD DS.
This process relies on Kerberos Constrained Delegation.

5. AD DS returns the Kerberos ticket to the connector.

6. The connector presents that ticket to the application.

7. The application verifies the access and responds to the client request through the Application Proxy.

The Azure AD Application Proxy requires either the Basic or Premium edition of Azure AD. You can enable it
from the Application proxy blade of the Azure AD tenant in the Azure portal. From the same blade, you
can download the connector software and install it on your on-premises computers. This install process
sets up two Windows services: Microsoft AAD Application Proxy Connector and Microsoft AAD
Application Proxy Connector Updater.

To publish an internal application and make it accessible to users outside your private network, perform
the following steps:

1. Sign in to the Azure portal with the account that has the Global administrator role.

2. Navigate to the blade of your Azure AD tenant.

3. Click Enterprise Applications.


4. On the Enterprise Application – All applications blade, click +New application.

5. On the Add an application blade, click On-premises applications.

6. On the Add your own on-premises application blade, configure Application proxy settings,
including:

o Specifying Internal Url for access to the application from inside your on-premises network.

o Specifying External Url for access to the application from the internet.

o Setting Pre Authentication to either Azure Active Directory or Passthrough authentication.

o Disabling or enabling the Translate URLs in Headers and Translate URLs in Application Body
settings, depending on whether the application requires the original host header in the request.

o Assigning a Connector Group to isolate applications on a per-connector basis.

Configuring access to Azure AD–integrated applications


There are several ways to make Azure AD–
integrated applications available to end users. The
most common approach involves using the Access
Panel, which is a web-based portal accessible at
https://myapps.microsoft.com.

A user must successfully authenticate to view the
portal interface. The portal interface contains the
applications page, which automatically displays a
list of applications to which the user is entitled.
You manage this entitlement by assigning
applications to individual users or to groups of
users.

Users sign in to the Access Panel by providing their Azure AD credentials. To avoid additional
authentication prompts when launching applications from the panel, you should configure SSO.
SSO allows users to run Azure AD–registered applications without providing a user name and password if
they have already successfully authenticated. Such applications might include software as a service (SaaS)
applications available from the Azure AD application gallery and custom applications developed in-house,
which reside on-premises or are registered in Azure AD. With SSO, users do not have to remember their
credentials for each SaaS application.

You can use the following three mechanisms to implement application SSO support:

• Password-based SSO with Azure AD storing credentials for each user of a password-based
application. When Azure AD administrators assign a password-based SSO app to an individual user,
they can enter app credentials on the user's behalf. Alternatively, users can enter and store credentials
themselves directly from the Access Panel. In either case, when accessing a password-based SSO app,
users first rely on their Azure AD credentials to authenticate to the Access Panel. Next, when they
open an app, Azure AD transparently extracts the corresponding app-specific stored credentials and
securely relays them to the app provider within the browser's session.

• Federated SSO, with Azure AD leveraging federated trusts with providers of SSO applications, such as
Box or Salesforce. In this case, an application provider relies on Azure AD to handle users’
authentication, and accepts an Azure AD–generated authentication token when granting access to
the application.

• Existing SSO, with Azure AD leveraging a federated trust between the application and an SSO
provider, established by using an existing security token service (STS) implementation such as AD FS.
This is similar to the second mechanism because it does not involve separate application credentials.
However, in this case, when users access the Access Panel application, your current SSO solution
handles their authentication requests.

Note: In each of these cases, Azure AD serves as a central point of managing application
authentication and authorization.

Besides providing access to applications, the Access Panel also allows users to edit their profile settings,
change their password, and provide identifying information necessary when performing password resets.
Users can also edit multi-factor authentication settings and view their account details such as their user ID,
alternative email, and phone numbers. In addition, if you implement self-service group management,
delegated users will be able to view and modify group membership from the groups page within the
Access Panel interface.
Internet Explorer 8 and newer versions, Chrome, and Firefox all support the Azure AD Access Panel. You
can also use it on any other browser that supports JavaScript and CSS. As part of the initial setup, you will
need to install the Access Panel browser extension. You will be prompted to install it the first time you
attempt to start an application via the Application Access Panel interface.

Implementing RBAC
RBAC enables fine-grained access management of
resources that exist in an Azure subscription. This
mechanism relies on predefined and custom-
defined roles to grant users and groups that reside
in Azure AD permissions necessary to conduct
role-specific actions on a subscription, resource
group, or resource level.

Note: When assigning permissions via RBAC, you have to choose identities from the Azure AD
tenant that is associated with your subscription.

Azure AD identities to which you can grant RBAC-based permissions include users, guest users, groups,
service principals, and managed service identities. A Managed Service Identity is an Azure AD object that
represents an instance of an Azure service. This provides a security context for code running within that
instance, which then allows you to specify the level of access that this code will have within your Azure
subscription. Managed Service Identity supports Azure VMs, virtual machine scale sets, Azure App Service
apps, Azure Functions, Azure Service Bus, and Azure Event Hubs.

Note: At the time of authoring this content, the Managed Service Identity feature is in
public preview.

By using RBAC, you can implement delegated management of cloud resources. For example, you can
allow your development team to create their own virtual machines, but limit virtual networks to which
those machines can be connected.

RBAC built-in roles


RBAC has three basic built-in roles that apply to all resource types:
• Owner. This role provides full access to all the resources in the scope of the role, including the ability
to delegate access to these resources.

• Contributor. This role allows you to create and manage all types of resources in the scope of the role,
without the ability to delegate access to these resources.
• Reader. This role provides view-only access to Azure resources in the scope of the role.

In addition, there is a large number of resource type-specific built-in RBAC roles with predefined
permissions that further narrow access to resources. Examples of built-in, resource type-specific roles
include Virtual Machine Contributor and SQL DB Contributor.

Additional Reading: For the list of built-in roles, refer to: http://aka.ms/Cge87w

To configure RBAC, you can use the Azure portal, Azure PowerShell, and Azure CLI. Permissions granted
through RBAC are always inherited from the parent scope by child scopes. This means that the RBAC-
based permissions you assign on the subscription level will apply to all of its resource groups and
resources. Similarly, the RBAC-based permissions you assign to a resource group will apply to all of its
resources.

Note: The Owner role at the subscription level has permissions to subscription resources
that are equivalent to permissions of the Service administrator. However, only the Service
administrator has the ability to change the association between the Azure subscription and an
Azure AD tenant.

Azure RBAC allows you to manage permissions at the management plane of Azure resources, such as
creating a SQL database. However, you cannot use RBAC to delegate management of data plane
operations within Azure resources, such as creating a table within a SQL database.

If the predefined built-in roles do not meet your requirements, you can create custom roles by using Azure
PowerShell or Azure CLI. Custom roles you define are stored in the Azure AD tenant associated with your
subscription, allowing you to share them across multiple subscriptions.

Note: At the time of authoring this content, you cannot create custom roles by using the
Azure portal. This requires the use of Azure PowerShell, Azure CLI, or REST API.
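As an illustration of the PowerShell-based approach, the following sketch clones an existing built-in role into a custom role. The role name, description, and action list are examples only, and you would substitute your own subscription ID in AssignableScopes:

```powershell
# Start from the definition of an existing built-in role
$role = Get-AzureRmRoleDefinition -Name "Virtual Machine Contributor"
$role.Id = $null
$role.Name = "Virtual Machine Operator"   # example custom role name
$role.Description = "Can start, restart, and view virtual machines."
$role.Actions.Clear()
$role.Actions.Add("Microsoft.Compute/virtualMachines/start/action")
$role.Actions.Add("Microsoft.Compute/virtualMachines/restart/action")
$role.Actions.Add("Microsoft.Compute/virtualMachines/read")
$role.AssignableScopes.Clear()
$role.AssignableScopes.Add("/subscriptions/<subscription ID>")   # substitute your subscription ID
New-AzureRmRoleDefinition -Role $role
```

New-AzureRmRoleDefinition registers the custom role in the tenant, after which it appears alongside the built-in roles when you assign access at any of its assignable scopes.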

Additional Reading: For more information regarding creating custom roles, refer to:
https://aka.ms/Fivzy4

Managing RBAC by using the Azure portal


To manage RBAC by using the Azure portal, perform the following steps:

1. In the Azure portal, navigate to the Access control (IAM) blade of the resource, resource group, or
subscription to which you intend to grant permissions via RBAC.

2. Click + Add.

3. On the Add permissions blade, in the Role drop-down list, select the role that you want to assign.

4. In the Assign access to drop-down list, select Azure AD user, group, or application, Function
App, App Service, Virtual Machine, or Virtual Machine Scale Set, depending on which type of
identity you want to use.

5. In the Select text box, type the full or partial name of the user, guest user, group, service principal, or
Managed Service Identity to which you want to assign the role. Alternatively, you can pick one or
more entries from the list of Azure AD identities appearing below the text box.

6. Click Save to confirm the selection.


You can also remove access from the Access control (IAM) blade of the resource, resource group, or
subscription, but you cannot remove inherited access at the child level.

Manage RBAC by using Azure PowerShell


You can manage RBAC by using Azure PowerShell. Azure PowerShell includes the following cmdlets to
manage role assignments:

• Get-AzureRmRoleAssignment. Retrieves the roles assigned to a user.

• Get-AzureRmRoleDefinition. Lists the definition for a role.


• New-AzureRmRoleAssignment. Assigns a role assignment to a user or a group.

• Remove-AzureRmRoleAssignment. Removes a role assignment from a user or a group.

For example, the following command adds a user to the Reader role at the specified scope:

New-AzureRmRoleAssignment -UserPrincipalName user@somedomain.com -RoleDefinitionName Reader -Scope /subscriptions/GUID/resourceGroups/ResourceGroupName

Manage RBAC by using Azure CLI


You can manage RBAC by using the Azure CLI. Azure CLI includes the following commands to manage
role assignments:

• az role assignment list. Retrieves the roles assigned to a user.

• az role definition list. Lists the definition of a role.

• az role assignment create. Assigns a role assignment to a user or a group.

• az role assignment delete. Removes a role assignment from a user or a group.
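As a sketch mirroring the earlier PowerShell example, the following Azure CLI command assigns the Reader role at the scope of a resource group; the user name, subscription GUID, and resource group name are placeholders:

```shell
# Assign the Reader role to a user at the scope of a resource group (placeholder values)
az role assignment create \
  --assignee user@somedomain.com \
  --role Reader \
  --scope /subscriptions/GUID/resourceGroups/ResourceGroupName
```

Running az role assignment list with the same --assignee afterward confirms that the assignment exists at the expected scope.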



Demonstration: Integrating SaaS apps with Azure AD and configuring RBAC

In this demonstration, you will learn how to:

• Add a directory application and configure SSO.

• Implement RBAC.

Question: How can you centrally manage identities, and access to applications and resources
in the cloud?

Lesson 3
Overview of Azure AD Premium
Features such as password write-back or self-service group management increase overall user
productivity and reduce administrative overhead for enterprises. These features, and other more advanced
capabilities such as enhanced auditing, reporting, monitoring, and multi-factor authentication for non-
privileged users, require Azure AD Premium licensing.

Lesson Objectives
After completing this lesson, you will be able to:

• Identify the features of Azure AD Premium.


• Describe the purpose of Azure Multi-Factor Authentication.

• Explain how to configure advanced Azure Multi-Factor Authentication settings.

• Explain the purpose of Azure AD Privileged Identity Management and Identity Protection.

Introducing Azure AD Premium


The Azure AD Premium edition provides
additional functionality beyond the features
available in the Free and Basic editions. However,
this edition introduces additional licensing cost
per user. Microsoft provides a free trial that covers
100 user licenses that you can use to become
familiar with the full functionality of the Azure AD
Premium edition.

The following features are available with the Azure AD Premium edition:

• Self-service group and application management. This feature minimizes administrative overhead by
delegating permissions to create and manage Azure AD groups and to provide access to
Azure AD-registered applications. Users can create requests to join groups and
obtain access to apps. Delegated admins can approve requests, maintain group membership, and
assign users to applications.

• Dynamic groups. In addition to creating groups and assigning their members explicitly, you can also
create dynamic groups, in which membership changes occur automatically, according to the rules you
define. These rules contain Azure AD object attribute–based criteria, which determine whether a user
or a device should be a member of a particular group.
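As a hedged illustration of the rule syntax in use at the time of authoring, an advanced dynamic membership rule combines attribute-based clauses with logical operators; the attribute values below are examples only:

```
(user.department -eq "Sales") -and (user.country -eq "US")
```

Azure AD evaluates the rule whenever the referenced attributes change, adding or removing the user from the group automatically.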

• Conditional access. With this feature, you can implement conditional access to your applications.
Conditions can include the following criteria:

o Group membership. The user must belong to a group you designate.

o Location. The user must reside in a specific location; for example, a trusted network.

o Device platform. The user must use a device running a specific operating system, such as iOS,
Android, Windows 10 Mobile, or Windows.

o Device status. The device must be compliant at the time when the user attempts access. For
example, you might want to ensure that the device is registered in Azure AD or enrolled into
your mobile device management solution.

o Risk policy. Azure AD Identity Protection determines the acceptable risk level associated with
users attempting access.

If a user or device does not meet the criteria you choose, you can block access or enforce multi-factor
authentication.

• Advanced security reports and alerts. You can monitor access to your cloud applications by viewing
detailed logs that show anomalies and inconsistent access patterns. Advanced reports are machine
learning based and help you improve access control and detect potential threats.

• Multi-Factor Authentication. Full Multi-Factor Authentication works with on-premises applications
(using VPN, RADIUS, and others), Azure, Office 365, Dynamics 365, and third-party Azure AD gallery
applications. You can also implement third-party MFA solutions. Multi-Factor Authentication is
covered in more detail later in this lesson.

• Microsoft Identity Manager (MIM) licensing. MIM integrates with Azure AD Premium to provide
hybrid identity solutions. MIM can seamlessly bridge multiple on-premises authentication stores such
as AD DS, LDAP, or Oracle with Azure AD. This provides consistent end user experience when
accessing on-premises LOB applications and SaaS solutions.

• Enterprise SLA of 99.9%. You are guaranteed 99.9% availability of the Azure AD Premium service. The
same SLA applies to Azure AD Basic.

• Password reset and account unlock with writeback. Users have the ability to unlock their on-premises
accounts and reset their passwords by leveraging Azure AD functionality.

• Device writeback. In hybrid scenarios, where an on-premises AD DS forest integrates with an Azure
AD tenant via Azure AD Connect, you can register a user’s device in Azure AD and replicate its object
to the on-premises AD DS forest.
• Cloud App Discovery. This feature allows you to discover cloud-based applications used by on-
premises users. It provides you with information about usage of cloud apps, including number of
users per app, number of web requests per app, and the time spent working with each app. Cloud
App Discovery uses software agents that must be installed on users' computers. You can deploy the
agents by using Group Policy deployment or Microsoft System Center Configuration Manager. Agents
monitor cloud app access and then send collected data to the Cloud App Discovery service by using
an encrypted channel. You can view reports based on this data in the Azure portal.

• Cloud App Security proxy. This functionality enhances conditional access by routing requests that
satisfy the specified conditions to the Cloud App Security environment, which enforces additional
access and session controls in real time. For example, you can define policies that prevent download
of certain documents that you designate as sensitive or require their encryption prior to a download.
You can also restrict or block access to specific applications.

• Azure AD Connect Health. You can use this tool to gain insight into operational aspects of Azure AD
Connect, which implements directory synchronization between AD DS and Azure AD. It collects alerts,
performance counters, and usage patterns, and presents the collected data in the Azure portal. You
will learn more about Azure AD Connect in module 10 of this course.

• Azure AD Identity Protection and Privileged Identity Management (PIM). This functionality offers
enhanced control and monitoring of Azure AD privileged users. Identity Protection and Privileged
Identity Management are covered in more detail later in this lesson.

• Integration with Azure Information Protection. Azure Information Protection facilitates classification
of documents and emails to control access to their content. It leverages Azure Active Directory as its
identity provider.

• Windows 10 Azure AD Join–related features. The features in this category include support for auto-
enrollment into a Mobile Device Management solution, such as Microsoft Intune, self-service
BitLocker recovery, Enterprise State Roaming, or the ability to add local administrators to Azure AD-
joined Windows 10 devices.

Azure Multi-Factor Authentication

Azure Multi-Factor Authentication adds a layer of security in the authentication process by requiring
multiple methods of verifying user identity. Multi-factor authentication combines something that
you know, such as a password or a PIN, with something that you have, such as your phone or a
token, and/or something that you are (biometric technologies).

You can implement Azure Multi-Factor Authentication in several ways, based on users' capabilities
and the level of additional security that they need. Your options include:

• A mobile app to provide one-time passwords or to receive push notifications from the application.

• A phone call.

• A text message, which is very similar to the mobile app authentication method, but push notifications
or authentication codes are delivered via text messages.

• A third-party OATH token.


Depending on your licensing arrangements and the services your users access, you have the following
options to implement Azure Multi-Factor Authentication when authenticating against Azure AD:

• Complimentary Multi-Factor Authentication for administrators. Global Administrator users can use
multi-factor authentication free of charge.

• Multi-factor authentication included in Azure AD Premium, Azure MFA, or Enterprise Mobility +
Security (EMS). These offers cover the MFA functionality for every licensed user. You simply have to
assign a license to a user and configure the corresponding MFA settings.

• Azure Multi-Factor Authentication Provider. This allows you to extend the multi-factor authentication
functionality to non-administrators without purchasing Azure AD Premium, Azure MFA, or EMS
licenses. The MFA-related charges become part of the Azure subscription billing. You can choose
between a per-authentication and a per-user provider, which affects the pricing model. The
per-authentication model is more cost-effective if a large number of users authenticate via MFA only
occasionally, whereas the per-user model is more cost-effective if a small number of users use MFA
frequently.

• A subset of the Azure Multi-Factor Authentication functionality is included in Office 365. Multi-factor
authentication for Office 365 does not incur additional cost besides an Office 365 subscription license.
However, this works only with Office 365 applications.

Note: Only the second and the third of these options offer a number of advanced MFA
features. You will learn more about these features in the next topic of this lesson.
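
As an illustration of the licensed per-user model, the following sketch uses the MSOnline PowerShell module to enforce MFA for a single account. It assumes the MSOnline module is installed and that you sign in with sufficient privileges; the user principal name is a placeholder.

```powershell
# Sketch: enforce per-user Multi-Factor Authentication with the MSOnline module.
# The user principal name is a placeholder for illustration only.
Connect-MsolService

$mfa = New-Object -TypeName Microsoft.Online.Administration.StrongAuthenticationRequirement
$mfa.RelyingParty = '*'
$mfa.State = 'Enabled'

Set-MsolUser -UserPrincipalName 'user@adatum.com' -StrongAuthenticationRequirements @($mfa)
```

Setting State to 'Enforced' instead of 'Enabled' skips the grace period during which the user can continue to sign in without completing MFA registration.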

Another consideration when choosing the MFA approach is the location of user accounts and resources
you want to protect (on-premises or in the cloud). Based on this consideration, you can:

• Deploy Multi-Factor Authentication in the cloud. This is used mostly if the main goal is to secure
access to first-party Microsoft apps, SaaS apps from the Azure Marketplace, and applications
published through Azure AD Application Proxy. This option is viable as long as user accounts are
available in Azure AD. It is not relevant whether they were created in Azure AD directly or they
represent synchronized or federated AD DS users.

• Deploy Multi-Factor Authentication on-premises. This option is applicable when user accounts reside
in AD DS, including scenarios where the user accounts are federated with Azure AD. This provides an
additional level of protection for remote access solutions, such as VPN or Remote Desktop Gateway.
In addition, this approach is applicable to IIS applications not published through Azure AD App Proxy.

The implementation details depend on the version of the operating system hosting the AD FS role. With
Windows Server 2012 R2 or older, you need to install the Multi-Factor Authentication server and
configure it with an on-premises Active Directory. With Windows Server 2016, you can leverage the Azure
MFA adapter, built into the operating system.

Additional Reading: For more information on configuring MFA with Windows Server
2016-based AD FS, refer to: “Configure AD FS 2016 and Azure MFA” at: https://aka.ms/xxj3y4

Additional Reading: For detailed comparison between these options, refer to:
https://aka.ms/Cmtwvs

Exploring advanced Multi-Factor Authentication settings

Azure MFA included in Azure AD Premium, Azure MFA, or Enterprise Mobility + Security (EMS), or
implemented via an Azure Multi-Factor Authentication Provider, offers a number of advanced
features, described in the following sections.

Fraud Alert
The Fraud Alert feature allows users to report
fraudulent attempts to sign in by using their
credentials. If a user receives an unexpected multi-
factor authentication request, the user can
respond with the fraud alert code (0# by default)
to report an attempt to gain unauthorized access.
The fraud alert automatically blocks the authentication request. You can also enable the option to block
the user's account, so that subsequent authentication attempts are automatically denied. Additionally,
you can configure email notifications to a custom email address, facilitating notifications to
administrative or security teams. After appropriate remediation action has been taken, including changing
the user's password, an administrator can then unblock the user's account.

One-Time Bypass
One-Time Bypass is a setting that allows a user to sign in temporarily without using Multi-Factor
Authentication. The bypass expires after the number of seconds that you specify. This can be useful if a
user needs to use an Azure MFA protected resource or application, but is not able to access a phone for
text messaging or automated calls, or the Multi-Factor Authentication app. The default one-time bypass
period is five minutes.

Custom Voice Messages

Custom Voice Messages allow administrators to customize the messages that the Multi-Factor
Authentication process uses during automated voice calls to an office phone. These replace the standard
recordings that are supplied with Multi-Factor Authentication.

Trusted IPs
Trusted IP addresses allow administrators to bypass Multi-Factor Authentication for users who sign in
from a specific location, such as the company’s local intranet. You configure this option by specifying a
range of IP addresses corresponding to this location. In federated scenarios, you can use the All
Federated Users option instead.

App Passwords
App Passwords allow users who have been enabled for multi-factor authentication to use non-browser
clients that do not support modern authentication to access Azure AD–protected apps or resources.
Outlook 2010 is an example of such a client.

Remember Multi-Factor Authentication for trusted devices

The Allow users to remember multi-factor authentication on devices they trust setting allows users
to suspend enforcement of Multi-Factor Authentication for a defined period of time on a specific device.
This requires at least one successful authentication on that device. The default period of time is 14 days,
but you can extend it to 60 days.

Caching
With caching enabled and configured, after a user successfully authenticates through MFA, subsequent
authentication attempts from the same user will automatically succeed within the time that you specify,
without additional MFA prompts.
In addition to the above settings, there are some user-specific MFA settings that enhance security in case
of a stolen or lost device.

Require selected users to provide contact methods again

This setting will require users to complete the MFA registration process. This automatically invalidates the
current Allow users to remember multi-factor authentication on devices they trust and One-time
bypass options.

Delete all existing app passwords generated by the selected users

This setting will invalidate existing app passwords for non-browser applications that do not support
modern authentication.

Restore multi-factor authentication on all remembered devices

If a user loses a device configured with the Allow users to remember multi-factor authentication
on devices they trust setting, this option reinstates Multi-Factor Authentication for that device.

Additional Reading: For more information regarding advanced MFA settings, refer to:
https://aka.ms/Ed7eot

Demonstration: Configuring and using Azure AD Premium Multi-Factor Authentication
In this demonstration, you will learn how to:

• Create a Multi-Factor Authentication provider.

• Configure fraud alerts.

• View fraud alert reports.


• Configure one-time bypass settings.

• Create a one-time bypass.

• Configure trusted IP addresses.

• Enable users to create app passwords.

Azure AD Privileged Identity Management and Identity Protection

Azure AD Privileged Identity Management facilitates identifying and controlling privileged identities and
their access to Azure AD–protected resources, including Microsoft Azure, Office 365, and Microsoft
Intune. You can use Azure AD Privileged Identity Management to discover the users who have Azure AD
administrative roles, track the usage of these roles, and generate reports summarizing this usage. In
addition, Azure AD Privileged Identity Management allows you to delegate Azure Active Directory
administrative access on demand by implementing just-in-time administration, which minimizes the risks
associated with a permanent-access security model. You restrict the delegation to a subset of users by
designating them as eligible admins for a particular Azure Active Directory role. Eligible admins have to
request a role activation to gain the corresponding privileges. Depending on your preferences, requests
might require approvals. You can also delegate the ability to provide approvals to other users. In
addition, you have the option of extending the elevation to apply to RBAC roles.

Additional Reading: For more information regarding using Privileged Identity Management for
delegating access to Azure resources, refer to: https://aka.ms/Hg4eee

You can enable Privileged Identity Management in the Azure portal by using an account that is a Global
Administrator of the target Azure AD tenant. After you enable Privileged Identity Management, you can
use the privileged identity management dashboard to monitor the number of users that are assigned
privileged roles, and the number of temporary or permanent administrators. The portal also includes
options to generate reports detailing administrator access history and to configure alerts triggered when a
privileged role is assigned.

Note: Azure Privileged Identity Management does not control or monitor the usage of
Service Administrator or co-Administrators of an Azure subscription.

Azure AD Identity Protection offers a comprehensive insight into the usage of privileged identities in your
Azure AD tenant. It continuously monitors usage patterns and uses adaptive machine learning to detect
unauthorized authentication attempts. It evaluates risk events and assigns risk levels for each user. This
allows you to configure risk-based policies that mitigate potential threats. For example, if there are two
consecutive sign-in attempts from two different parts of the world by using the same user account, a
policy can block that user or temporarily enforce multi-factor authentication.

Note: Azure AD Privileged Identity Management and Identity Protection require Azure AD
Premium P2.

Additional Reading: For more information regarding Azure AD Privileged Identity Management and
Identity Protection, refer to: https://aka.ms/Is724e

Question: Which features of Azure AD Premium would you consider to be most useful for
your organization?

Question: A. Datum requires that their applications use multi-factor authentication. The
company has implemented this technology in its on-premises infrastructure, and wants to
extend it for applications and resources that reside in Azure. A. Datum wants to use the
authentication methods that are similar to what they are currently using in the on-premises
infrastructure. Can A. Datum use Azure Multi-Factor Authentication for this, and if so, why?

Lab: Implementing Azure AD


Scenario
The IT department at Adatum Corporation currently uses AD DS, and a range of Active Directory–aware
applications. While preparing for synchronizing its AD DS to Azure AD, A. Datum wants you to test some
of the features of Azure AD. The company wants you to evaluate Azure AD control mechanisms that
restrict access to third-party SaaS apps by individual Azure AD users and groups. A. Datum also wants you
to configure SSO for these apps and protect them by using Multi-Factor Authentication.
In addition to these tasks, Adatum wants you to evaluate some of the advanced features Azure AD
Premium offers. In particular, you will need to test joining a Windows 10–based computer to an Azure AD
tenant to prepare for implementing this configuration on all the Windows 10–based computers in the
Research department.

Objectives
After completing this lab, you will be able to:
• Administer Azure AD.

• Configure SSO for Azure Marketplace apps.

• Configure multi-factor authentication for administrators.


• Use the advanced features offered by Azure AD Premium.

• Configure SSO from a Windows 10–based computer that is joined to Azure AD.

Note: The lab steps for this course change frequently due to updates to Microsoft Azure.
Microsoft Learning updates the lab steps frequently, so they are not available in this manual. Your
instructor will provide you with the lab documentation.

Lab Setup
Estimated Time: 60 minutes

Virtual machine: 20533E-MIA-CL1


User name: Student

Password: Pa55w.rd

Before you start this lab, ensure that you complete the tasks in the “Preparing the lab environment”
demonstration, which is in the first lesson of this module. Also, ensure that the setup script is complete.

Exercise 1: Administering Azure AD


Scenario
You want to test the functionality of Azure AD by first creating a new Azure directory and enabling the
Premium functionality. You then want to create some pilot users and groups in Azure AD. You plan to use
both the Azure portal and Microsoft Azure Active Directory module for Windows PowerShell.

Exercise 2: Configuring SSO


Scenario
A. Datum is planning to deploy cloud-based applications, and wants to implement SSO for these
applications. You will install and configure a test application, and then validate the SSO experience.

Exercise 3: Configuring Multi-Factor Authentication


Scenario
Because A. Datum requires users to use Multi-Factor Authentication, you will need to configure and test
Multi-Factor Authentication for Global Administrators.

Exercise 4: Configuring SSO from a Windows 10–based computer


Scenario
A. Datum has an increasing demand to provide its remote and mobile users, who are using Windows 10–
based devices, with secure access to the cloud resources. The company plans to join Windows 10 devices
to Azure AD in order to simplify access to cloud resources by leveraging SSO. You want to test this
functionality by joining a Windows 10–based computer to Azure AD.

Question: What is the major benefit of joining Windows 10–based devices to Azure AD?

Question: What is the requirement for Delegated Group Management in Azure AD?

Module Review and Takeaways


Review Question

Question: What would you consider to be primary differences between Azure AD and
AD DS?

Tools
• Azure Active Directory V2 PowerShell module. Provides the necessary Windows PowerShell cmdlets for
user management, domain management, and configuring SSO: https://aka.ms/qqxznd

• Microsoft Azure Active Directory module for Windows PowerShell (64-bit version). An older version of
the Azure AD module for Windows PowerShell. Its functionality overlaps to a large extent with that of
the Azure Active Directory V2 PowerShell module; however, it offers some unique device management
capabilities (such as identifying devices registered by a given user with a single cmdlet):
http://aka.ms/Cuedhw
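
The device management capability mentioned above can be sketched as follows, assuming the older MSOnline module is installed; the user principal name is a placeholder.

```powershell
# Sketch: list the devices registered by a single user with one cmdlet.
# The user principal name is a placeholder for illustration only.
Connect-MsolService
Get-MsolDevice -RegisteredOwnerUpn 'user@adatum.com'
```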

Best Practices
Use RBAC to provide users and groups with the ability to manage Azure resources based on their job
requirements.

Common Issues and Troubleshooting Tips

• You don't receive a text or voice call that contains the verification code for Azure Multi-Factor
Authentication.

• You receive a "Sorry! We can't process your request" error when you try to set up security
verification settings for Azure Multi-Factor Authentication.

• You can't use Azure Multi-Factor Authentication to sign in to cloud services after you lose your
phone or the phone number changes.

• You receive a "We did not receive the expected response" error message when you try to sign in by
using Azure Multi-Factor Authentication.

• You receive an "Account verification system is having trouble" error message when you try to sign in
by using a work or school account.

Module 10
Managing Active Directory infrastructure in hybrid and cloud only scenarios

Contents:
Module Overview 10-1
Lesson 1: Designing and implementing an Active Directory environment by using Azure IaaS 10-2
Lesson 2: Implementing directory synchronization between AD DS and Azure AD 10-8
Lesson 3: Implementing single sign-on in federated scenarios 10-28
Lab: Implementing and managing Azure AD synchronization 10-37
Module Review and Takeaways 10-38

Module Overview
You have several distinct choices for integrating Active Directory Domain Services (AD DS) with Microsoft
cloud technologies. These choices include:

• Deploying AD DS domain controllers on Microsoft Azure virtual machines (VMs).

• Implementing directory synchronization and optional password hash synchronization between AD DS
and Azure Active Directory (Azure AD). If you choose password hash synchronization, you can also
provide Seamless Single Sign-On (Seamless SSO).

• Implementing directory synchronization and pass-through authentication between AD DS and
Azure AD. You also have the option of implementing Seamless SSO.

• Implementing directory synchronization and federation between AD DS and Azure AD. This approach
automatically provides single sign-on.

In this module, you will learn about these options and their implementation.

Objectives
After completing this module, students will be able to:
• Implement an Active Directory environment by using Azure Infrastructure as a Service (IaaS)
resources.

• Synchronize objects between AD DS and Azure AD.

• Set up single sign-on in federated scenarios.



Lesson 1
Designing and implementing an Active Directory
environment by using Azure IaaS
You can deploy one or more domain controllers on Azure VMs to provide authentication services for
workloads that depend on AD DS. These domain controllers operate as they would in an on-premises
environment. Their provisioning process also closely resembles the process you would follow in your own
datacenter. However, there are some differences due to the unique characteristics of Azure VMs and
related Azure IaaS resources. This lesson focuses on these unique characteristics.

Lesson Objectives
After completing this lesson, you will be able to:

• Prepare the lab environment for the remainder of this module.


• Describe the options for integrating AD DS and Azure IaaS.

• Plan the deployment of Active Directory domain controllers on Azure VMs.

• Implement Active Directory domain controllers on Azure VMs.

Demonstration: Preparing the lab environment


Perform the tasks in this demonstration to prepare the lab environment. The environment will be
configured while you progress through this module, learning about the Azure services that you will use in
the lab.

Important: Because the scripts in this course might delete objects that you have in your
subscriptions, you should use a new Azure subscription. You should also use a new Microsoft
account that is not associated with any other Azure subscription. This will eliminate the possibility
of any potential confusion when you run the setup scripts.

This course relies on custom Azure PowerShell commands, including Add-20533EEnvironment to prepare
the lab environment, and Remove-20533EEnvironment to perform clean-up tasks at the end of the
module.

Overview of AD DS and Azure integration options


AD DS offers a wide range of business-related
and technological benefits. By design, its primary
purpose is to serve as an identity and access
management solution for on-premises,
independently managed, isolated environments,
and most of its characteristics reflect this
underlying premise. The authentication
mechanisms of AD DS rely largely on having
domain-member computers permanently joined
to the domain. The communication with domain
controllers involves protocols such as Lightweight
Directory Access Protocol (LDAP) for directory
services lookups, Kerberos for authentication, and Server Message Block (SMB) for Group Policy–based
interaction with AD DS domain controllers. None of these protocols is suitable for internet environments.

If you want to provide an equivalent functionality in Azure, you can deploy AD DS domain controllers as
Azure VMs within an Azure virtual network. You might use this type of deployment to build a disaster
recovery solution for an existing on-premises AD DS environment or to implement a test environment.
You could also use it to provide local authentication to AD DS–dependent workloads running on Azure
VMs on the same or a directly connected Azure virtual network.

Azure AD DS
If you need to deploy AD DS–dependent workloads in Azure, but you want to minimize the overhead
associated with deploying and managing Active Directory domain controllers hosted on Azure VMs, you
should consider implementing Azure AD DS instead. Azure AD DS is a Microsoft-managed AD DS service
that provides the standard Active Directory features such as Group Policy, domain join, and support for
protocols such as Kerberos, NTLM, and LDAP. You will learn about this solution in the second lesson of
this module.

Planning to deploy Active Directory domain controllers on Azure virtual machines

Because Azure offers IaaS capabilities, you can use Azure VMs to host domain controllers. This allows you
to implement an Active Directory environment in the cloud. Hosting domain controllers in Azure can
provide benefits for a variety of on-premises and cloud-based workloads.

Some common reasons for placing domain controllers in Azure include:

• Providing authentication to AD DS–dependent applications and services within the Azure
environment.

• Extending the scope of the on-premises AD DS to one or more Azure regions for disaster recovery
purposes.
• Implementing additional AD DS domain controllers in Azure to enhance the resiliency of the directory
synchronization with Azure AD and Azure AD–federated deployments.

Deployment scenarios
There are three main scenarios that involve AD DS and Azure VMs:

• AD DS deployed to Azure VMs without cross-premises connectivity. This deployment results in the
creation of a new forest, with all domain controllers residing in Azure. Use this approach to
implement Azure-resident workloads hosted on Azure VMs that rely on Kerberos authentication or
Group Policy but have no on-premises dependencies.

• Existing on-premises AD DS deployment with cross-premises connectivity to an Azure virtual network
where the Azure VMs reside. This scenario uses an existing on-premises Active Directory environment
to provide authentication for Azure VM–resident workloads. When considering this design, you
should take into account the latency associated with cross-premises network traffic.

• Existing on-premises AD DS deployment with cross-premises connectivity to an Azure virtual network
hosting additional domain controllers on Azure VMs. The primary objective of this scenario is to
optimize workload performance by localizing authentication traffic.

Planning for deploying Active Directory domain controllers in Azure


When planning the deployment of AD DS domain controllers to Azure VMs, you should consider the
following:

• Cross-premises connectivity. If you intend to extend your existing AD DS environment to Azure, then
a key design element is cross-premises connectivity between your on-premises environment and the
Azure virtual network. You must set up either a site-to-site virtual private network (VPN) or Microsoft
Azure ExpressRoute. For more information regarding this topic, refer to Module 2, “Implementing and
managing Azure networking.”

• Active Directory topology. In cross-premises scenarios, you should configure AD DS sites to reflect
your cross-premises network infrastructure. This will allow you to localize the authentication traffic
and control the replication traffic between on-premises and Azure VM–based domain controllers.
Intra-site replication assumes high bandwidth and permanently available connections. By contrast,
inter-site replication allows for scheduling and throttling replication traffic. In addition, a proper site
design ensures that domain controllers in a given site handle authentication requests originating from
that site.

• Read-only domain controllers (RODCs). Some customers are wary about deploying writeable domain
controllers to Azure VMs, due to security concerns. One way to mitigate this concern is to deploy
RODCs instead. RODCs and writeable domain controllers provide similar user experiences. However,
RODCs lower the volume of egress traffic and the corresponding charges. This is a good option if an
Azure-resident workload does not require frequent write access to AD DS.
• Global catalog placement. Regardless of your domain topology, you should configure all your Azure
VM–based domain controllers as global catalog servers. This arrangement prevents global catalog
lookups from traversing cross-premises network links, which would negatively affect performance and
result in egress network traffic charges.
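
To reflect the site design described above, you could define a dedicated AD DS site and subnet for the Azure virtual network. The following is a minimal sketch using the ActiveDirectory module; the site names, subnet prefix, and site-link settings are placeholders.

```powershell
# Sketch: create an AD DS site for the Azure virtual network, associate the
# virtual network's IP subnet with it, and link it to an on-premises site.
# All names, prefixes, and replication settings are placeholders.
Import-Module ActiveDirectory

New-ADReplicationSite -Name 'Azure-EastUS'
New-ADReplicationSubnet -Name '10.10.0.0/24' -Site 'Azure-EastUS'

# Inter-site replication between the on-premises site and the Azure site,
# with a schedule-friendly 30-minute replication interval.
New-ADReplicationSiteLink -Name 'OnPrem-to-Azure' `
    -SitesIncluded 'Default-First-Site-Name','Azure-EastUS' `
    -Cost 100 -ReplicationFrequencyInMinutes 30
```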

Implementing Active Directory domain controllers on Azure VMs

When deploying AD DS to Azure VMs, you can install it either as an additional domain controller in an
existing on-premises Active Directory forest or as the first domain controller in a new Active Directory
forest. The two scenarios have similar requirements. The primary difference is that the first scenario
requires a cross-premises connection through a site-to-site VPN or ExpressRoute.

Install an additional Active Directory domain controller in an Azure VM

To implement an additional domain controller in an existing forest on an Azure VM:

1. Create an Azure virtual network with cross-premises connectivity.

2. Create an Azure Storage account.

Note: If you decide to use managed disks for the operating system and data disks on the
Azure VM, you do not have to create a storage account, unless you want to collect Azure VM
diagnostics.

3. Create an Azure VM and assign it a static IP address.

4. Install the AD DS and Domain Name System (DNS) server roles in the operating system of the
Azure VM.

Note: You can use a different DNS solution, but AD DS–integrated DNS is the most
common choice.

The following sections explain these steps in detail.

Create an Azure virtual network with cross-premises connectivity


When you create an Azure virtual network in this scenario, you need to specify:

• The name of the virtual network.

• An IP address space that does not overlap with the IP address space of your on-premises network.

• One or more subnets within the virtual network, with the IP address ranges within its IP address space.

• The DNS server settings that point to one or more of your on-premises DNS servers.

In addition, you need to provision cross-premises connectivity, either through a site-to-site VPN or
ExpressRoute. For details regarding these procedures, refer to Module 2, “Implementing and managing
Azure networking.”
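
Assuming the newer Az PowerShell module, the virtual network creation described above could look like the following sketch; all names, address ranges, and the DNS server IP address are placeholders.

```powershell
# Sketch: create a virtual network whose DNS setting points to an
# on-premises DNS server. Names and address ranges are placeholders.
$subnet = New-AzVirtualNetworkSubnetConfig -Name 'Subnet-AD' -AddressPrefix '10.10.0.0/24'

New-AzVirtualNetwork -Name 'AdatumVNet' -ResourceGroupName 'AdatumRG' `
    -Location 'eastus' -AddressPrefix '10.10.0.0/16' `
    -Subnet $subnet -DnsServer '192.168.0.10'
```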

Create an Azure Storage account


If you are not using managed disks, you need a storage account to host the virtual hard disks of the Azure
VM operating as an additional AD DS domain controller. You can create a storage account as a separate
step or, if you are using the Azure portal, you can create one when you deploy the Azure VM. If you are
using managed disks, you should consider creating a storage account to host Azure VM diagnostics.
Regardless of the type of disks you use, you should ensure that you allocate a separate data disk or disks
for the Active Directory database, log files, and SYSVOL. For details about managed and unmanaged disks,
refer to Module 3, “Implementing Azure VMs.”

Create an Azure VM and assign an IP address


Next, you need to create an Azure VM with a static IP address on one of the virtual network subnets. For
this purpose, you can use any of the methods described in Module 3, “Implementing Azure VMs.” You
also need to attach virtual disks that will host the database, logs, and SYSVOL files. Make sure to set
caching to None on the data and log disks.

Note: Choose the virtual machine size with sufficient memory to fully cache the entire
AD DS database. This should considerably improve its performance.
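The static IP assignment and the data disk configuration described above can be sketched as follows. This is a hedged sketch, not a complete deployment script: resource names are hypothetical, and it assumes the virtual network and the VM from the previous steps already exist.

```powershell
# Sketch only: resource names are hypothetical examples.
$vnet = Get-AzureRmVirtualNetwork -Name 'HybridVNet' -ResourceGroupName 'AdDsRG'
$subnet = Get-AzureRmVirtualNetworkSubnetConfig -Name 'AdSubnet' -VirtualNetwork $vnet

# Assign a static private IP address to the domain controller's network interface.
$nic = New-AzureRmNetworkInterface -Name 'Dc01Nic' -ResourceGroupName 'AdDsRG' `
    -Location 'westeurope' -SubnetId $subnet.Id -PrivateIpAddress '10.1.0.4'
$nic.IpConfigurations[0].PrivateIpAllocationMethod = 'Static'
Set-AzureRmNetworkInterface -NetworkInterface $nic

# Attach a data disk for the AD DS database, logs, and SYSVOL with caching set to None.
$vm = Get-AzureRmVM -ResourceGroupName 'AdDsRG' -Name 'Dc01'
Add-AzureRmVMDataDisk -VM $vm -Name 'Dc01-AdDsData' -Lun 0 `
    -DiskSizeInGB 32 -CreateOption Empty -Caching None
Update-AzureRmVM -ResourceGroupName 'AdDsRG' -VM $vm
```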

Install and configure DNS and AD DS server roles


To promote the server to a domain controller, you need to add the AD DS server role. You can accomplish
this by using Add Roles and Features in Server Manager or by running the following Windows
PowerShell cmdlet:

Install-WindowsFeature AD-Domain-Services -IncludeManagementTools

In addition, add the DNS server role. You can install it by using Add Roles and Features in Server
Manager or by running the following Windows PowerShell cmdlet:

Install-WindowsFeature DNS

After the server role installation completes, promote the server running Windows Server to a domain
controller. After the new domain controller is fully operational, update the DNS server settings of the
Azure virtual network to point to the static IP address you assigned to the Azure VM. These settings will
apply automatically to every new Azure VM you deploy to the same virtual network.
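The promotion and the DNS update described above can be sketched as follows. The domain name, paths, and IP address are hypothetical; the first part runs inside the Azure VM, the second from an Azure PowerShell session.

```powershell
# Sketch only: domain name, paths, and addresses are hypothetical examples.
# Run inside the Azure VM to promote it to a domain controller.
Install-ADDSDomainController -DomainName 'contoso.com' `
    -InstallDns `
    -DatabasePath 'F:\NTDS' -LogPath 'F:\NTDS' -SysvolPath 'F:\SYSVOL' `
    -Credential (Get-Credential CONTOSO\Administrator)

# Afterward, from Azure PowerShell, point the virtual network's DNS settings
# at the new domain controller's static IP address.
$vnet = Get-AzureRmVirtualNetwork -Name 'HybridVNet' -ResourceGroupName 'AdDsRG'
$vnet.DhcpOptions.DnsServers.Clear()
$vnet.DhcpOptions.DnsServers.Add('10.1.0.4')
Set-AzureRmVirtualNetwork -VirtualNetwork $vnet
```

Note that the F: paths assume a separate data disk, formatted and assigned that drive letter, in line with the caching guidance earlier in this topic.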

Note: To ensure resiliency and to qualify for the service level agreement (SLA), consider
deploying the Azure VM into an availability set or an availability zone. After the deployment is
complete, deploy another VM into the same availability set or availability zone and configure it as
an additional domain controller in the same domain as the first Azure VM.

Install a new Active Directory forest on an Azure virtual network


To implement a new Active Directory forest in Azure, perform the following steps:

1. Create an Azure virtual network by specifying:

o The name of the virtual network.

o An IP address space.

o One or more subnets within the virtual network, with the IP address ranges within its IP address
space.
o The DNS server addresses that point to the IP address you will assign to the Azure VM that will
host the AD DS domain controller.

2. Create a storage account. Be sure to follow the guidance about storage accounts provided earlier in
this topic.

3. Deploy an Azure VM to host the domain controller and DNS server roles.

4. Install the AD DS and DNS server roles.


To avoid direct access to the domain controller from the internet, do not assign a public IP address to its
network adapter. Instead, consider deploying another Azure VM running Windows Server on the same
virtual network and configuring it as a jump host. By assigning a public IP address to that virtual machine,
you will be able to connect to it by using Remote Desktop Protocol (RDP). From the RDP session, you can
manage the AD DS domain controller on the first virtual machine.
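For the new-forest scenario, the role installation and forest creation (steps 3 and 4 above) can be sketched as follows, run inside the Azure VM that will become the first domain controller. The forest name is a hypothetical example.

```powershell
# Sketch only: corp.contoso.com is a hypothetical forest name.
Install-WindowsFeature AD-Domain-Services -IncludeManagementTools

# Create the new forest, installing DNS and placing the database, logs,
# and SYSVOL on a data disk with caching set to None.
Install-ADDSForest -DomainName 'corp.contoso.com' `
    -InstallDns `
    -DatabasePath 'F:\NTDS' -LogPath 'F:\NTDS' -SysvolPath 'F:\SYSVOL' `
    -SafeModeAdministratorPassword (Read-Host -Prompt 'DSRM password' -AsSecureString)
```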

Check Your Knowledge

Question

How should you configure caching on Azure virtual machines hosting AD DS domain controllers?

Select the correct answer.

Set the caching to None on the disks hosting the database, SYSVOL, and log
files.

Set the caching to ReadOnly on the disks hosting the database, SYSVOL, and
log files.

Set the caching to ReadWrite on the disks hosting the database, SYSVOL, and
log files.

Set the caching to ReadWrite on the disks hosting the database and SYSVOL
files, and set it to None for the disk hosting log files.

Set the caching to ReadWrite on the disks hosting the database and SYSVOL
files, and set it to ReadOnly for the disk hosting log files.

Lesson 2
Implementing directory synchronization between AD DS
and Azure AD
Azure AD supports integration with AD DS, which considerably simplifies the management of identities in
hybrid environments. This integration relies on synchronization between AD DS and Azure AD. This lesson
describes the principles of this synchronization, its implementation by using Azure AD Connect, and its
monitoring by using Azure AD Connect Health. It also provides an overview of Azure AD DS, which offers
managed AD DS for Azure VM–resident workloads. Azure AD DS automatically synchronizes its content
with Azure AD.

Lesson Objectives
After completing this lesson, you will be able to:

• Describe directory synchronization.

• Compare the different directory synchronization options.

• Identify the directory synchronization option that is most beneficial in a given scenario.
• Prepare on-premises Active Directory for directory synchronization.

• Describe installation and configuration of Azure AD Connect.

• Manage and monitor directory synchronization.

• Implement Azure AD Domain Services.

• Implement directory synchronization by using Azure AD Connect.

Overview of directory synchronization


Directory synchronization involves copying
selected user, group, contact, and device objects
and their attributes between on-premises Active
Directory and Azure AD. In its simplest form, you
install a directory synchronization component on
a server with direct connectivity to your AD DS
domain controllers, provide credentials of an AD
DS user with Enterprise Admin privileges and an
Azure AD user with Global Admin privileges, and
then let the directory synchronization
component run. After the initial synchronization
completes, AD DS objects within the scope of
synchronization will automatically appear in Azure AD. By default, the synchronization process includes
password hashes. This way, if the user names in both identity stores match, AD DS users can authenticate
to Azure AD by using the same credentials as those they use to sign in to their on-premises computers.
This mechanism is known as same sign-on and requires that users provide their credentials the first time
they authenticate to Azure AD. Alternatively, you can implement single sign-on, which relies on either
Seamless SSO or federation between AD DS and Azure AD to provide access to Azure resources without
the need to reauthenticate.

Azure AD Connect
To implement synchronization, use Azure AD Connect. This tool automatically synchronizes objects—such
as users, groups, devices, and contacts—and their attributes from on-premises AD DS to Azure AD. The
synchronization includes the user principal name (UPN) attribute, which usually matches the name that
Active Directory users use to sign in to their on-premises computers. Matching the UPN across the two
environments simplifies the sign-in experience because users can use the same user name when they
authenticate to access cloud services. In addition, synchronizing password hashes results in matching
credentials in both directories.

Note: To be able to use the same name to sign in to AD DS and Azure AD, the DNS domain
names of Azure AD and AD DS must match. This, in turn, requires configuring and validating a
custom DNS domain name in the Azure AD tenant with which the on-premises Active Directory
synchronizes.

Note: When configuring Azure AD Connect synchronization settings, you must decide
which attribute will serve as the user name of user accounts that the synchronization process will
generate in Azure AD. The default and most common choice is the user principal name. In
addition, you must decide which attribute will serve as sourceAnchor, also known as
immutableId. Its purpose is to form a persistent, logical link between an AD DS user account and
its counterpart in Azure AD. Your choice is important, because the value of this attribute should
remain constant for the entire lifetime of a user account. Traditionally, the most common choice
for this attribute was objectGUID. However, there are two potential problems with using this
attribute:

• Cross–AD DS forest migration of user accounts results in a new objectGUID.

• AD DS generates the value of objectGUID. It is not possible to set it to an arbitrary value.


For these reasons, starting with Azure AD Connect 1.1.524.0, you can use msDS-
ConsistencyGuid as sourceAnchor. If its value is not set, Azure AD Connect sets its value to
objectGUID prior to synchronization. This value remains the same if you migrate a user account
to another AD DS forest. It is also possible to set it to an arbitrary value.
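As an illustration of setting the attribute explicitly, the following sketch copies a user's objectGUID into msDS-ConsistencyGuid by using the Active Directory PowerShell module. The account name is hypothetical, and Azure AD Connect performs an equivalent step automatically when the attribute is empty.

```powershell
# Sketch only: 'userb' is a hypothetical account name.
$user = Get-ADUser -Identity 'userb' -Properties 'mS-DS-ConsistencyGuid'
if (-not $user.'mS-DS-ConsistencyGuid') {
    # Persist the current objectGUID so the link survives a cross-forest migration.
    Set-ADUser -Identity $user -Replace @{ 'mS-DS-ConsistencyGuid' = $user.ObjectGUID.ToByteArray() }
}
```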

Azure AD Connect provides a wide range of capabilities, including:


• Support for multiple forest scenarios.

• Filtering based on the domain, organizational unit, and individual object attribute.

• Synchronization of password hashes to Azure AD.


Azure AD Connect provides an installation wizard that allows you to specify the Active Directory
implementation that matches your environment and the integration settings that match your
requirements. For example, you can synchronize a single or multiple forests, choose between password
synchronization or federation, and enable password reset write-back or device write-back. The wizard
automatically applies all specified settings.

Azure AD Connect incorporates three components that support the following features:

• Synchronization. This is the primary component of Azure AD Connect responsible for synchronizing
users, groups, contacts, and device objects. Its functionality relies on AD DS and Azure AD connectors
that handle communication with their respective identity providers. This communication facilitates
regular updates to object attributes within the scope of synchronization.

• Active Directory Federation Services (AD FS). This component provides the functionality necessary to
implement federation between AD DS and Azure AD by using the Windows Server AD FS server role.
Implementing federation eliminates the requirement for password hash synchronization in single
sign-on scenarios.
• Health monitoring. Azure AD Connect Health monitors the status of your Azure AD Connect
deployment.

Comparing Azure AD integration scenarios


When implementing Azure AD Connect, you can choose from the following integration scenarios:

• Directory synchronization

• Directory synchronization with password hash synchronization (same sign-on)

• Directory synchronization with password hash synchronization and Seamless Single Sign-On (SSO)

• Directory synchronization with Pass-through Authentication and same sign-on

• Directory synchronization with Pass-through Authentication and Seamless SSO

• Directory synchronization with federation (single sign-on)

Directory synchronization
In this scenario, directory synchronization synchronizes AD DS objects to Azure AD, including a number of
user attributes, but without user password hashes. Any changes to Active Directory users’ passwords do
not affect passwords of the corresponding Azure AD user objects. This might lead to confusion, because
the passwords that users must provide depend on the resources that they are attempting to access. This
can result in an increased number of help desk calls.

Note: If you intend to implement this scenario, do not select any options on the User sign-
in page when installing Azure AD Connect.

Directory synchronization with password hash synchronization (same sign-on)


In this scenario, directory synchronization synchronizes attributes of user accounts, including their
password hashes, to Azure AD. This method ensures that passwords for users in the scope of
synchronization are the same in Azure AD and in on-premises AD DS. This eliminates the problem
associated with the first scenario, although users typically need to provide their password twice.
Users must specify the passwords of their Azure AD user accounts during their initial attempt to access
Azure AD–authenticated resources. The sign-in process converts the user’s password into a hash and
passes it to Azure AD. Azure AD compares the hash with the one stored in its local data store. If these two
match, the authentication attempt succeeds.

The authentication prompt typically includes the option to save the user name and corresponding
password in the user’s credential store so that subsequent authentication attempts do not trigger a
prompt. While this simplifies subsequent authentication attempts, it is an example of same sign-on, not
single sign-on. The user authenticates separately against two distinct directory services, even though their
respective credentials match. However, for many organizations, the simplicity of this solution compensates
for the lack of true single sign-on.

Note: If you intend to implement this scenario, select the Password Synchronization
option on the User sign-in page when installing Azure AD Connect.

Note: The same sign-on and single sign-on solutions require that the DNS domain names
in AD DS and Azure AD match.

Directory synchronization with password hash synchronization and Seamless SSO


As in all the other scenarios, directory synchronization ensures that matching user account attributes exist
in Active Directory and Azure AD. However, in this case, Azure AD not only synchronizes users’ password
hashes, but also relies on several dedicated Active Directory objects to communicate with Active Directory
securely and process its authentication tokens. These objects include a computer account named
AZUREADSSOACCT and autologon.microsoftazuread-sso.com and aadg.windows.net.nsatc.net
service principal names (SPNs) in the Active Directory domain that you configure for synchronization.
To enable this option, select the Password Synchronization and the Enable single sign-on options on
the User sign-in page when installing Azure AD Connect. The installation process will then include the
following additional tasks:

1. Create a new computer account AZUREADSSOACCT in the source Active Directory domain.

2. Store the computer account’s Kerberos decryption key in the target Azure AD tenant.

3. Associate the autologon.microsoftazuread-sso.com and aadg.windows.net.nsatc.net SPNs with
the AZUREADSSOACCT computer account in the Active Directory domain.

Note: You must configure the two SPNs to be part of the intranet zone for the web
browsers of Active Directory users. You can apply this configuration by using Group Policy.
Implementation details depend on the type of web browser.

With these changes in place, users who successfully sign in to their Active Directory domain–based client
computers will be able to authenticate to cloud-based resources without providing their passwords. Azure
AD relies on the AZUREADSSOACCT computer account to facilitate secure communication with the
Active Directory domain of the authenticating user. This communication includes forwarding of the user’s
Kerberos ticket, which Azure AD decrypts to verify whether Active Directory successfully authenticated
that user.

Additional Reading: For more information, refer to: “Azure AD Seamless Single Sign-On”
at: https://aka.ms/wz4wvq

This scenario supports single sign-on to cloud applications via web browsers and from Microsoft Office
programs that support modern authentication. This includes Office 2013 and newer versions.

Directory synchronization with pass-through authentication and same sign-on


This scenario facilitates same sign-on while also eliminating the need to synchronize password hashes.
Instead, when a user attempts to access a cloud-based resource, Azure AD passes the user’s password
through to AD DS for verification. To accomplish this, Azure AD relies on an agent running on an on-
premises computer running Windows Server that retrieves authentication requests and relays them to an
AD DS domain controller.

To implement this scenario, select the Pass-through authentication option on the User sign-in page
when installing Azure AD Connect. After the installation completes, you will need to perform the following
additional tasks:

1. Download the AADApplicationProxyConnectorInstaller.exe Authentication Agent installer from
https://aka.ms/ri8d07. Install it on one or more on-premises servers with direct connectivity to Active
Directory domain controllers and connectivity to Azure AD on TCP ports 80 and 443. To install the
agent, run the following from an elevated Windows PowerShell prompt:

AADApplicationProxyConnectorInstaller.exe REGISTERCONNECTOR="false" /q

2. Register each instance of the Authentication Agent with your Azure AD tenant. At the
Windows PowerShell prompt, change the current directory to the C:\Program Files
\Microsoft AAD App Proxy Connector folder, and then run the following command:

.\RegisterConnector.ps1 -modulePath "C:\Program Files\Microsoft AAD App Proxy Connector\Modules\" -ModuleName "AppProxyPSModule" -Feature PassthroughAuthentication

When prompted, provide the credentials of a Global Administrator account of your Azure AD tenant.

By implementing Azure AD pass-through authentication, you provide the same sign-on user experience,
but without the need to synchronize password hashes to Azure AD. Some organizations prefer this option
because they are reluctant to store copies of users’ password hashes outside their on-premises Active
Directory.
This scenario supports same sign-on to cloud applications from on-premises Active Directory–joined
computers and Azure AD–joined computers. Users must access these applications either from a web
browser or from Office 365 client applications that support modern authentication, such as Office 2013
and newer.

Additional Reading: For more information about pass-through authentication, refer to:
“User sign-in with Azure Active Directory Pass-through Authentication” at: https://aka.ms/e6w1t5

Directory synchronization with pass-through authentication and Seamless SSO


This scenario combines the benefits of pass-through authentication, which eliminates the need to
synchronize password hashes to Azure AD, with the benefits of Seamless SSO, which eliminates the need
to provide a password when authenticating to Azure AD. This delivers a user experience similar to
federated single sign-on, but without the need for additional, dedicated federation infrastructure. On the
other hand, this scenario lacks some features of federated single sign-on, such as support for custom
claims or non-Microsoft multi-factor authentication.

To implement this scenario, select the Pass-through authentication and Enable single sign-on options
on the User sign-in page when installing Azure AD Connect. In addition, you must perform the following
post-installation steps:

1. Download the AADApplicationProxyConnectorInstaller.exe Authentication Agent installer from
https://aka.ms/ri8d07.

2. Install it on one or more on-premises servers with direct connectivity to Active Directory domain
controllers and connectivity to Azure AD on TCP ports 80 and 443. To install the agent, run the
following from an elevated Windows PowerShell prompt:

AADApplicationProxyConnectorInstaller.exe REGISTERCONNECTOR="false" /q

3. Register each instance of the Authentication Agent with your Azure AD tenant. At the Windows
PowerShell prompt, change the current directory to the C:\Program Files\Microsoft AAD App Proxy
Connector folder, and then run the following command:

.\RegisterConnector.ps1 -modulePath "C:\Program Files\Microsoft AAD App Proxy Connector\Modules\" -ModuleName "AppProxyPSModule" -Feature PassthroughAuthentication

4. Configure autologon.microsoftazuread-sso.com and aadg.windows.net.nsatc.net SPNs to be
part of the intranet zone for the web browsers of all Active Directory users. You can apply this
configuration by using Group Policy. Implementation details depend on the type of web browser.

This scenario supports Seamless SSO when accessing cloud applications from on-premises Active
Directory–joined computers and Azure AD–joined computers. Users must access these applications either
from a web browser or from Office 365 client applications that support modern authentication.

Note: Azure AD pass-through authentication automatically enables a feature called Smart
Lockout. It protects AD DS and Azure AD identities from brute force attacks and prevents account
lockouts resulting from these attacks. With Smart Lockout in place, Azure AD keeps track of failed
sign-in attempts. If these attempts reach the value of the Lockout Threshold property before
the amount of time specified in the Lockout Counter After property passes, Azure AD rejects
any subsequent sign-in attempts for the duration of the lockout. You can retrieve and modify the
values of the Lockout Threshold, Lockout Counter After, and Lockout Duration Azure AD
properties by using the Graph application programming interface (API). Modifying these values
requires Azure AD Premium P2.
You should ensure that the value of the Azure AD Lockout Threshold property is smaller than
the value of the AD DS Lockout Threshold property. Conversely, you should ensure that the
value of the Azure AD Lockout Duration property is larger than the value of the AD DS Lockout
Duration property.

Additional Reading: The Graph API provides programmatic access to Azure AD via REST
API endpoints. For more information, refer to: "Microsoft Graph or the Azure AD Graph” at:
https://aka.ms/gxb1ch

Additional Reading: For more information about Smart Lockout, refer to: “Azure Active
Directory Pass-through Authentication: Smart Lockout” at: https://aka.ms/o3akoi

Directory synchronization with federation (single sign-on)


As in all other scenarios presented in this topic, directory synchronization synchronizes user account
information to Azure AD. Azure AD uses the synchronized information to identify authenticating users
and redirect their requests to a security token service (STS), such as AD FS. The STS contacts AD DS to
perform authentication and, if the attempt is successful, it returns the corresponding token to Azure AD.
Users need to authenticate only once during the initial sign-in to their domain-joined computers, even
when accessing cloud-based resources.

SSO relies on a federated trust between Azure AD and AD DS. This trust enables users to authenticate to
obtain access to cloud applications and resources by using their AD DS credentials.

Azure AD Connect supports a range of federation solutions. However, it is particularly helpful when using
AD FS because Azure AD Connect includes a wizard that guides you through deployment and
configuration of AD FS, automating most of the intermediary tasks.

It is important to understand that, by default, if AD FS becomes unavailable, users will not be able to
authenticate when accessing cloud-based resources. Deploying a reliable and highly available federation
infrastructure requires more resources and management than other scenarios described above.

Feature comparison
The following table lists the features that each Azure AD integration option supports.

In the table, the integration options are numbered as follows:

(1) Directory synchronization only
(2) Directory synchronization with password hash synchronization (same sign-on)
(3) Directory synchronization with password hash synchronization and Seamless SSO
(4) Directory synchronization with pass-through authentication and same sign-on
(5) Directory synchronization with pass-through authentication and Seamless SSO
(6) Directory synchronization with federation (SSO)

Feature                                                 (1)     (2)     (3)     (4)     (5)     (6)

Sync users, groups, and contacts to Azure AD            Yes     Yes     Yes     Yes     Yes     Yes

Sync password hashes to Azure AD                        Yes     Yes     Yes     No      No      No

Enable hybrid Office 365 scenarios                      Yes*    Yes*    Yes†    Yes†    Yes†    Yes‡

Users can sign in with Active Directory credentials     No      Yes     Yes     Yes     Yes     Yes

Reduce password administration costs                    No      Yes     Yes     Yes     Yes     Yes

Control password policies from AD DS                    No      Yes     Yes     Yes     Yes     Yes

Enable Azure Multi-Factor Authentication                Yes     Yes     Yes     Yes     Yes     Yes

Enable on-premises multi-factor authentication          No      No      No      No      No      Yes

Authenticate against AD DS                              No      No      No      Yes     Yes     Yes

Implement SSO with Active Directory credentials         No      No      Yes     No      Yes     Yes

Requires federation infrastructure                      No      No      No      No      No      Yes

* Limited integration. † Web browsers and modern authentication apps only. ‡ Full support.

Discussion: Which directory synchronization option would be optimal for your organization?

Discuss which directory synchronization option
would be most appropriate for your
organization. Use the table from the previous
topic to identify which features you might need.

Preparing on-premises Active Directory for directory synchronization

When you prepare for directory synchronization,
you should consider a range of factors. The
following sections describe these considerations
in detail.

Review domain controller requirements


To work with Azure AD Connect, domain and
forest functional levels must be Windows Server
2003 or later. For the password write-back
feature, domain controllers must be running at
least Windows Server 2008 service pack 2 (SP2).

Review Azure AD Connect computer requirements

The computer that is running Azure AD Connect must be running Windows Server 2008 SP2 or newer,
and it must have the latest hotfixes and updates. To implement password synchronization, you must use
Windows Server 2008 R2 SP1 or newer. For express settings, the computer must be a domain member
server or a domain controller, but for custom setting installation, the computer can belong to a
workgroup. If you plan to use Azure AD Connect with AD FS, servers where AD FS and Web Application
Proxy are deployed must be running Windows Server 2012 R2 or later.
In addition, Azure AD Connect requires Microsoft .NET Framework 4.5.1 or later and Windows PowerShell
3.0 or later. For deploying AD FS and Web Application Proxy, you must enable Windows Remote
Management on the servers where you will install these components.

Review hardware recommendations


The following table provides guidance on hardware sizing based on the number of objects in AD DS.

Number of objects in AD DS    Central processing unit (CPU)    Memory    Hard disk size

Fewer than 10,000             1.6 gigahertz (GHz)              4 GB      70 GB

10,000–50,000                 1.6 GHz                          4 GB      70 GB

50,000–100,000                1.6 GHz                          16 GB     100 GB

100,000–300,000               1.6 GHz                          32 GB     300 GB

300,000–600,000               1.6 GHz                          32 GB     450 GB

More than 600,000             1.6 GHz                          32 GB     500 GB

Review accounts and required permissions


Installing and configuring Azure AD Connect requires the following accounts:
• An Azure AD work or school account with the Global Administrator role. Create this account in the
Azure AD tenant that you plan to integrate with AD DS.
• An on-premises AD DS account. Required privileges depend on whether you choose the express or
custom installation settings. With the express installation settings, you must use an account that is a
member of the Enterprise Admins group in the AD DS forest you plan to synchronize with Azure AD.
This account is responsible for creating the synchronization user account in AD DS and granting it the
necessary permissions to perform read and write operations during synchronization. With custom
installation settings, you can precreate the synchronization user account with the appropriate level of
permissions.

Additional Reading: For more information about Azure AD Connect synchronization
features and their requirements, refer to: “Azure AD Connect: Accounts and permissions” at:
https://aka.ms/f4bysk

Azure AD Connect uses an Azure Global Administrator account to implement directory integration and
create the Azure AD service account. This account will provision and update Azure AD objects when the
Azure AD Connect setup wizard runs. The name of the Azure AD service account has the prefix Sync_,
followed by the name of the server that is hosting Azure AD Connect and a random string of characters.
The directory synchronization process creates an AAD_<installation identifier> user account in the Users container of the root
domain of a synchronized forest. This is the account for the synchronization engine running as the
Microsoft Azure AD Sync service on the server where you installed the Azure AD Connect software,
assuming you used a domain-member server for this purpose. The account has a randomly generated
complex password configured to never expire. When the directory synchronization service runs, it uses the
service account to read attributes of Active Directory objects.

Review network connectivity requirements


Synchronization with Azure AD occurs over Secure Sockets Layer (SSL). This synchronization is outbound,
with Azure AD Connect initiating it via TCP port 443. Internal network communication uses standard
Active Directory–related ports.

If the computer running Azure AD Connect resides behind a firewall, the firewall should allow
communication via the protocols and ports listed in the following table.

Service                                  Protocol                            Port

LDAP                                     TCP/User Datagram Protocol (UDP)    389

Kerberos                                 TCP/UDP                             88

DNS                                      TCP/UDP                             53

Kerberos change password                 TCP/UDP                             464

Remote procedure call (RPC)              TCP                                 135

RPC randomly allocated high TCP ports    TCP                                 1024–65535; 49152–65535

SMB                                      TCP                                 445

SSL                                      TCP                                 443

Microsoft SQL Server                     TCP                                 1433
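As a quick sanity check of these requirements, you can probe connectivity from the prospective Azure AD Connect server. The domain controller name below is a hypothetical example; login.microsoftonline.com stands in for the outbound Azure AD endpoint.

```powershell
# Sketch only: dc01.contoso.com is a hypothetical on-premises domain controller.
# Outbound HTTPS to Azure AD:
Test-NetConnection -ComputerName 'login.microsoftonline.com' -Port 443

# Internal Active Directory-related ports:
foreach ($port in 53, 88, 135, 389, 445, 464) {
    Test-NetConnection -ComputerName 'dc01.contoso.com' -Port $port
}
```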

Review certificate requirements


All AD FS servers must use the same HTTPS certificate. The AD FS configuration, including the SSL
certificate thumbprint, replicates through a Windows Internal Database (WID) or through a SQL Server
database across all the members of the AD FS server farm. You need to use a certificate that you obtain
from a public certification authority (CA).

Review Azure AD Connect supporting components


Azure AD Connect installs the following components on the server:

• Microsoft SQL Server 2012 Command Line Utilities

• SQL Server 2012 Native Client

• SQL Server 2012 Express LocalDB

• Microsoft Online Services Sign-In Assistant for IT Professionals

• Microsoft Visual C++ 2013 Redistributable Package

If you specify during setup that you will use an existing SQL Server instance, the setup process excludes
the SQL Server 2012 Express LocalDB from the list of components to install.

Review UPN requirements


Azure AD Connect automatically assigns the UPN suffix to AD DS user accounts synchronized to Azure
AD. If you want to implement same sign-on or single sign-on, you must ensure that the values of the UPN
attribute of Azure AD users match the values of the UPN attribute of the corresponding AD DS users. To
accomplish this, you must add the domain name matching the UPN suffix to your Azure AD tenant and
verify its ownership. For example, if your organization uses @contoso.com as its AD DS UPN suffix, you
need to add and verify contoso.com as a domain name in Azure AD. This ensures that
userb@contoso.com in the on-premises AD DS maps to the userb@contoso.com account in Azure AD
after you enable directory synchronization.

If your on-premises AD DS domain uses a UPN suffix that is not routable, such as Contoso.local, you must
replace this UPN with a publicly resolvable DNS name that matches a verified domain in your Azure AD
tenant. Otherwise, synchronization will generate Azure AD user accounts with names in the format
username@domain.onmicrosoft.com, where domain is unique per Azure AD tenant. To maintain the
naming convention that references the name of your organization, you should ensure that you have
UPNs for AD DS users set up correctly, with the matching domains added to Azure AD, before you
synchronize.
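If you need to remediate non-routable UPN suffixes in bulk before synchronizing, you can script the change with the ActiveDirectory module. The following sketch assumes the Contoso.local/contoso.com names from the example above; substitute your own suffixes.

```powershell
# Register the routable UPN suffix in the forest (contoso.com is assumed to be
# a verified domain in the Azure AD tenant).
Get-ADForest | Set-ADForest -UPNSuffixes @{add = 'contoso.com'}

# Rewrite the non-routable suffix on every affected user account.
Get-ADUser -Filter "userPrincipalName -like '*@contoso.local'" |
    ForEach-Object {
        $newUpn = $_.UserPrincipalName -replace '@contoso\.local$', '@contoso.com'
        Set-ADUser -Identity $_ -UserPrincipalName $newUpn
    }
```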

Prepare AD DS
Before deploying Azure AD Connect, it is essential that you review and remediate any issues in the on-
premises Active Directory. Your review should include identifying:

• Invalid characters in attribute values

• Non-unique attributes

• Schema extensions

During your review, remember the following requirements and rules applicable to invalid characters.

Attribute           Max length   Requirements                     Invalid characters

proxyAddress        256          Must be unique                   )(;><][\
sAMAccountName      20                                            !#$%^&{}\{`~"/[]:@<>+=;?*
givenName           64                                            ?@\+
surname             64                                            ?@\+
displayName         256                                           ?@\+
mail                256          Must be unique                   [!#$%&*+/=?^`{}]
mailNickname        64                                            "\[]:><;
userPrincipalName   64/256       • Must be unique in the forest   }{#‗$%~*+)(><!/\=?`
                                 • @ character must exist
                                 • Must not include a space,
                                   end in a space, a period,
                                   &, or @
                                 • Must be internet routable

After you complete your review, perform the following remediation tasks:

• Remove duplicate proxyAddress and userPrincipalName attributes.


• Update blank and invalid userPrincipalName attributes, and replace with valid userPrincipalName
attributes.
• Remove invalid characters in the following attributes: givenName, surname, sAMAccountName,
displayName, mail, proxyAddress, mailNickname, and userPrincipalName.

UPNs for same sign-on and single sign-on can contain letters, numbers, periods, dashes, and underscores.
No other characters are allowed.

IdFix
The IdFix tool, available from Microsoft Download Center, enables you to identify and remediate most
object synchronization issues in AD DS, including common ones such as duplicate or malformed
proxyAddress and userPrincipalName attributes. You can limit its scope to individual organizational
units (OUs).
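If you want to script part of this discovery yourself, the ActiveDirectory module can approximate a subset of what IdFix checks. This rough sketch flags users whose userPrincipalName is blank, lacks the @ character, or contains characters from the invalid list above; it is not a substitute for running IdFix.

```powershell
# Flag users with blank or malformed userPrincipalName values
# (a rough approximation of one IdFix check, not a full replacement).
$invalidUpnChars = '[}{#$%~*+)(><!/\\=?`]'

Get-ADUser -Filter * -Properties userPrincipalName |
    Where-Object {
        -not $_.userPrincipalName -or
        $_.userPrincipalName -notmatch '@' -or
        $_.userPrincipalName -match $invalidUpnChars
    } |
    Select-Object SamAccountName, userPrincipalName
```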

Installing and configuring Azure AD Connect


You can install the Azure AD Connect tool by using express setup, which is sufficient for simple
integration scenarios. Alternatively, you can use custom setup if you have more complex requirements.

Additional Reading: Microsoft Azure Active Directory Connect is available from Microsoft Downloads
at: http://aka.ms/Jlpj42

Azure AD Connect express setup


You can use Azure AD Connect express setup to implement directory synchronization with password
synchronization for a single Active Directory forest. Express setup creates a LocalDB instance, which is a
lightweight version of SQL Express. Express setup offers the option to enable Exchange hybrid
deployment, which will configure the required attribute write-back option.

Azure AD Connect with express settings will:

1. Install the synchronization engine.


2. Configure the Azure AD connector.

3. Configure the on-premises AD DS connector.

4. Enable password synchronization.

5. Configure synchronization services.

6. Configure synchronization services for an Exchange hybrid deployment (optional).

To install Azure AD Connect by using the express settings, perform the following steps:

1. Sign in to the server on which you wish to install Azure AD Connect by using an account with local
administrative privileges.

2. Run the Microsoft Azure AD Connect Setup program (AzureADConnect.msi).

3. On the Welcome to Azure AD Connect page, select I agree to the license terms and privacy
notice, and then click Continue.

4. On the Express Settings page, click Use express settings.


5. On the Connect to Azure AD page, type the user name and password of an Azure AD account with
the Global Administrator role, and then click Next.

6. On the Connect to AD DS page, type the user name and password of an AD DS Enterprise
Administrator account, and then click Next.
7. On the Azure AD sign-in configuration page, identify whether your Azure AD tenant has an existing
verified domain with a name that matches the Active Directory UPN suffix. If it does not, but you
want to proceed with the installation, select the Continue without any verified domain check box.
At this point, you can also specify the on-premises attribute that you want to use as the Azure AD
user name. By default, Azure AD Connect uses userPrincipalName for this purpose.

Note: If you decide to proceed without the matching verified domain name, the following
message appears: Users will not be able to sign-in to Azure AD using their existing on-
premises credentials.

8. On the Ready to configure page, review the settings, and then click Install.

9. On the Configuration complete page, click Exit.

Custom installation of Azure AD Connect


You will need to use the Azure AD Connect custom setup for more complex scenarios. These scenarios
include modifying the scope of synchronization, merging identities from multiple AD DS forests, and AD
FS deployments. To install Azure AD Connect with password synchronization by using custom settings in a
single AD DS forest environment, perform the following steps:
1. Sign in to the server on which you wish to install Azure AD Connect by using an account with local
administrative privileges.

2. Run the Microsoft Azure AD Connect Setup program (AzureAdConnect.msi).


3. On the Welcome to Azure AD Connect page, select I agree to the license terms and privacy
notice, and then click Continue.

4. On the Express Settings page, click Customize.

5. On the Install required components page, you can optionally select one of the following:
o Specify a custom installation location. You can specify a different location to install Azure AD
Connect.

o Use an existing SQL Server. This allows you to use an existing SQL Server instance. Selecting this
option automatically excludes SQL Server Express LocalDB from the list of components to install.

o Use an existing service account. You must use this option when using a remote SQL Server
instance or when your internet proxy requires authentication. By default, Azure AD Connect
generates and uses a virtual service account for its synchronization services.

o Specify custom sync groups. By default, Azure AD Connect creates four local admin groups on
the server where you install it. They allow you to delegate synchronization-related administrative
tasks to others. You can pre-create your own groups instead and specify them at this point.

6. Click Install.

7. On the User sign-in page, select one of the following:

o Password Synchronization. This option synchronizes user password hashes to Azure AD.

o Pass-through authentication. This option allows Azure AD to validate user sign-ins directly
against your on-premises Active Directory through a lightweight authentication agent, without
requiring password hashes to be synchronized to Azure AD.

o Federation with AD FS. This option assists you with deployment of the AD FS infrastructure after
the setup of the synchronization component completes.
o Do not configure. Select this option if you already have an existing federation solution in place.

o Enable single sign-on. Select this option if you want to combine Password Synchronization or
Pass-through authentication with Seamless SSO.

8. On the Connect to Azure AD page, type the user name and password of an Azure AD Global
Administrator account, and then click Next.
9. On the Connect your directories page, specify the Active Directory forest, and then click Add
Directory. When prompted, in the AD Forest Account window, ensure that the Use existing
account option is selected, and then enter the name of a pre-created account that you intend to use
for synchronization. Alternatively, you can select the Create new account option, provide the
credentials of an AD DS Enterprise Administrator account, and then click OK. This will create the
synchronization account for you.

10. On the Connect your directories page, click Next.

11. On the Azure AD sign-in configuration page, identify whether your Azure AD tenant has an existing
verified domain with a name that matches the Active Directory UPN suffix. If it does not, and you
want to proceed with the installation, select the Continue without any verified domain check box.
At this point, you also can specify which on-premises attribute you want to use as the Azure AD user
name. By default, Azure AD Connect uses userPrincipalName for this purpose.

Note: If you decide to proceed without the matching verified domain name, the following
message appears: “Users will not be able to sign-in to Azure AD using their existing on-premises
credentials.”

12. On the Domain and OU filtering page, specify which domains and organizational units to
synchronize and then click Next.
13. On the Uniquely identifying your users page, select the default Users are represented only once
across all directories option, and then click Next.

Note: On the Uniquely identifying your users page, you can alter how directory
synchronization behaves in multiple-forest environments.

14. On the Filter users and devices page, you can use synchronization filtering based on Active
Directory group membership.

15. On the Optional Feature page, review the following options, and then click Next:

o Exchange hybrid deployment. Enable this option in Microsoft Exchange coexistence scenarios.
o Exchange Mail Public Folders. Enable this option to synchronize a subset of attributes of mail-
enabled public folders.
o Azure AD app and attribute filtering. Enable this option to filter the attributes that will
synchronize in Azure AD based on the cloud applications that you intend to use.

o Password synchronization. This option is available if you selected pass-through authentication
or federation as the sign-in method. Enabling it synchronizes password hashes to Azure AD as a
backup, so that sign-in can continue if the primary method becomes unavailable.

o Password writeback. Enable this option to synchronize changes to account passwords in the
cloud back to AD DS.
o Group writeback. Enable this option if you have Office 365 groups that you want to replicate to
on-premises Active Directory as distribution groups.

o Device writeback. Enable this option to replicate devices registered in or joined to Azure AD to
AD DS. This is useful when implementing conditional access scenarios.

o Directory extension attribute sync. Enable this option to synchronize AD DS custom attributes
to Azure AD.
16. If you enabled the Azure AD app and attribute filtering option in the previous step, on the Azure
AD Apps page, you can restrict attributes to synchronize based on your choice of Azure AD apps,
such as Exchange Online or Microsoft SharePoint Online.

17. The Azure AD attributes page also appears only if you selected the Azure AD app and attribute
filtering option on the Optional Feature page. On the Azure AD attributes page, you can select
individual AD DS attributes that you want to synchronize with Azure AD.

18. If you selected Directory extension attribute sync, then on the Directory extension page, you can
extend the schema in Azure AD with custom attributes that exist in your AD DS.
19. On the Ready to configure page, click Install to complete the custom installation of Azure AD
Connect.

Note: A later topic, “Deploying AD FS,” discusses custom AD FS installation.

Note: At the time of authoring, you can install Azure AD Connect only by using the setup
wizard. Unattended or silent installation is not supported.

Configuring filtering options


You can customize the scope of synchronization by configuring filtering based on:

• Domains. You might have a domain with objects that you do not want to synchronize with Azure AD.
• OUs. This is a popular filtering option. You use it to select objects from specific OUs that will
synchronize with Azure AD.

• Attributes. Attribute-based filtering provides a much more granular level of control. By using this type
of filtering, you can specify individual objects from on-premises AD DS that should or should not
synchronize with Azure AD.

You can configure filtering for domains and OUs in on-premises AD DS by rerunning Azure AD Connect.
To do this, use the following procedure:
1. Start Azure AD Connect.

2. On the Welcome to Azure AD Connect page, click Configure.

3. On the Additional tasks page, select Customize synchronization options, and then click Next.

4. When prompted, provide the credentials for an Azure AD Global Administrator and an AD DS
Enterprise Admin. You will be able to modify domain and OU filtering settings from the Domain
/OU Filtering page.

To configure attribute-based filtering in on-premises AD DS, perform the following steps:

1. Start Synchronization Rules Editor.

2. Modify inbound or outbound synchronization rules.

Synchronizing directories
After you define filtering for the objects that you plan to synchronize with Azure AD, you can configure
scheduled or manual synchronization. You can perform manual synchronization from the Synchronization
Service Manager or by using Windows PowerShell. In the Synchronization Service Manager, you can
manage Run Profiles that define the process of synchronization. You can configure the following Run
Profiles:

• Full Import

• Full Synchronization

• Delta Import

• Delta Synchronization

• Export
To synchronize objects from AD DS, you need to run the appropriate profile from the Synchronization
Service Manager. Alternatively, for manual synchronization, you can use the Azure AD Connect PowerShell
cmdlet Start-ADSyncSyncCycle.
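For example, on the Azure AD Connect server you can trigger either a delta or a full synchronization cycle from an elevated Windows PowerShell session (the ADSync module is installed together with Azure AD Connect):

```powershell
# Load the ADSync module that ships with Azure AD Connect.
Import-Module ADSync

# Run a delta synchronization cycle outside the regular schedule.
Start-ADSyncSyncCycle -PolicyType Delta

# Or run a full (initial) import and synchronization.
Start-ADSyncSyncCycle -PolicyType Initial
```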

Note: Microsoft does not support renaming a server that is hosting an installation of
Azure AD Connect.

Managing and monitoring directory synchronization


Directory synchronization relies on built-in schedulers to carry out synchronization tasks. Azure AD
Connect includes two schedulers:

• A scheduler that handles password sync.

• A scheduler that handles object and attribute sync.

The password sync scheduler initiates synchronization in response to password change and reset events,
so typically there is no reason to customize its behavior. The object and attribute sync scheduler performs
synchronization every 30 minutes. You can modify its frequency by using the Set-ADSyncScheduler
Windows PowerShell cmdlet. You can also modify the type of synchronization. For example, you can set
the next synchronization run to perform a full import and sync rather than a delta-based synchronization
only. The Start-ADSyncSyncCycle cmdlet allows you to run individual synchronization tasks outside of the
scheduled sync cycle. For example, to initiate a delta synchronization, you can run Start-ADSyncSyncCycle
–PolicyType Delta.
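The scheduler behavior described above can be inspected and adjusted on the Azure AD Connect server, for example:

```powershell
# Display the current scheduler settings, including the effective sync interval.
Get-ADSyncScheduler

# Lengthen the sync cycle from the default 30 minutes to one hour.
Set-ADSyncScheduler -CustomizedSyncCycleInterval 01:00:00

# Make the next scheduled run a full import and sync instead of a delta run.
Set-ADSyncScheduler -NextSyncCyclePolicyType Initial
```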

You can use Azure AD Connect Health to monitor your on-premises identity infrastructure and the
synchronization services that Azure AD Connect provides. Azure AD Connect Health requires the Azure
AD Premium P1 or P2 edition. It uses agents running on the servers hosting directory synchronization
infrastructure. This infrastructure includes the server where you installed Azure AD Connect. If you
implemented a federation between AD DS and Azure AD, it also includes the servers hosting AD FS and
either AD FS proxy servers or Web Application Proxy components. The agents collect information about
synchronization events, configuration settings, and performance of the synchronization operations, and
then send the collected information to the Azure AD Connect Health service.

Azure AD Connect Health consists of three services:

• Azure AD Connect Health for Sync

• Azure AD Connect Health for AD DS Sync

• Azure AD Connect Health for AD FS

Azure AD Connect Health for Sync monitors and provides information on the synchronizations that occur
between your on-premises AD DS and Azure AD. Azure AD Connect Health for Sync includes the
following key capabilities:

• Customizable alerts in response to changes in the synchronization status. For critical alerts, you can
subscribe to email notifications. Every alert contains suggested resolution steps, links to
documentation describing potential causes, and a history of previously resolved, matching alerts.

• Sync insight. The available information includes latency of sync operations and object change trends.
Information about synchronization latency originates from the Azure AD Connect server. The
synchronization objects change trend provides a graphical representation of the number of successful
and failed synchronizations.
Azure AD Connect Health for AD DS Sync monitors and provides information on the status of domain
controllers that form the AD DS infrastructure. Azure AD Connect Health for AD DS includes the following
key capabilities:
• Customizable alerts in response to changes in the Active Directory and domain controller status. You
can subscribe to email alert notifications that include suggested resolution steps, links to
documentation describing potential causes, and a history of previously resolved, matching alerts.
• Customizable dashboards, including the Domain Controller and Replication Status dashboards.
Dashboards offer a convenient method of viewing the most important operational parameters of
your domain controllers.

Note: The next lesson of this module describes Azure AD Connect Health for AD FS.

To start Azure AD Connect Health, perform the following steps:

1. Sign in to the Azure portal.


2. Locate Azure AD Connect Health by searching for it in the Azure Marketplace or by selecting
Marketplace, and then selecting Security + Identity.

3. On the introductory blade, click Create. This opens another blade with your directory information.

4. On the directory blade, click Create.



Implementing Azure AD DS
Azure AD DS is a Microsoft-managed AD DS service. The service consists of two Active Directory domain
controllers in a new, single-domain forest. When you provision the service, the Azure platform
automatically deploys these two domain controllers to an Azure virtual network that you designate. In
addition, the managed AD DS automatically synchronizes its users and groups from the Azure AD tenant
associated with the Azure subscription hosting the virtual network. Effectively, the Azure AD DS domain
will contain the same users and groups as its Azure AD counterpart. This provides the following
capabilities:

• You can join Azure VMs to the managed AD DS domain if they reside on the same virtual network or
another virtual network connected to it.

• Azure AD users can use their existing credentials to sign in to these Azure VMs.

If you have an on-premises AD DS domain that synchronizes with the same Azure AD tenant, your on-
premises AD DS users can sign in to the Azure AD DS domain by using their existing credentials.

However, in this scenario, the on-premises Active Directory domain is separate from the Active Directory
domain that Azure AD DS implements. The two Active Directory domains have different domain names
and separate sets of user, group, and computer objects, although the user and group objects within the
scope of Azure AD Connect synchronization have matching attributes.

Azure AD DS offers support for the same set of protocols as on-premises AD DS. With Azure AD DS, you
can migrate applications that depend on AD DS to Azure VMs without having to deploy and maintain
additional domain controllers or establish connectivity with the on-premises infrastructure.

Note: There are some additional, important differences between AD DS and Azure AD DS.
For example, Azure AD DS does not allow you to create trust relationships or extend the schema.
Its user and group objects are read-only. It also offers relatively limited support for Group Policy,
with only two built-in Group Policy Objects—one containing computer settings and another
containing user settings. In addition, while it is possible to perform LDAP binds and LDAP reads
against Azure AD DS, there is no support for LDAP writes.

Additional Reading: For more information about Azure AD DS, refer to: “Azure AD
Domain Services” at: https://aka.ms/kzye0f

Demonstration: Implementing directory synchronization by using


Azure AD Connect
In this demonstration, you will learn how to:

• Create an Azure AD tenant.

• Create an Azure AD Global Admin user account.

• Install Azure AD Connect with custom settings.

Question: Is there a way to install Azure AD Connect unattended?

Question: Can you rename a server after you install Azure AD Connect on it?

Lesson 3
Implementing single sign-on in federated scenarios
AD FS supports federating AD DS with Azure AD. This allows organizations to benefit from federation
features while accessing cloud resources and maintaining full control of organizational identities in their
on-premises environments.

Lesson Objectives
After completing this lesson, you will be able to:

• Explain how AD FS and the Web Application Proxy roles interoperate.

• Prepare for deploying AD FS by using Azure VMs.


• Implement AD FS by using Azure AD Connect.

• Perform basic AD FS management and maintenance tasks.

Overview of AD FS and Web Application Proxy


You can use the Active Directory Federation Services server role of Windows Server to provide a
federation-based single sign-on experience to on-premises users across various cloud-based platforms.
After authenticating with their AD DS credentials, users can access Azure-based resources, Microsoft
online services (such as Exchange Online or SharePoint Online) that rely on Azure AD authentication, and
Software as a Service (SaaS) applications integrated with Azure AD.

This functionality requires directory synchronization between the on-premises Active Directory
environment and the corresponding Azure AD tenant, as did other Azure AD integration methods
described earlier in this module. However, you must also deploy a security token service (STS)
infrastructure, such as AD FS servers. Because such servers need to be able to communicate directly with
the AD DS infrastructure, they must reside on the same internal network as AD DS domain controllers. To
provide the ability to authenticate via AD FS from the internet, you must also deploy additional servers in
your perimeter network that function as proxies of the AD FS servers. With Windows Server 2012 R2 and
Windows Server 2016, you can use servers running the Web Application Proxy role service for this
purpose.

AD FS provides the infrastructure that enables users to authenticate in one environment and use that
authentication to obtain access to resources in another. AD FS works seamlessly with AD DS to relay
tokens that contain information about users in on-premises AD to resource providers.

If a user initiates an authentication request through AD FS by using an AD FS–aware client (such as most
internet browsers), AD FS first forwards the request to AD DS. If the Active Directory authentication is
successful, the STS component of the AD FS server issues an appropriately formed security token. This
token serves as the authentication proof to the service that the user attempts to access, such as Azure,
Office 365, or a non-Microsoft cloud or application provider.

How AD FS works with Azure AD


The steps listed below describe the process of signing in to a browser-based SaaS application integrated
with Azure AD when using AD FS:

1. The user opens a web browser and sends an HTTPS request to the SaaS application.

2. The SaaS application determines if the user belongs to an Azure AD tenant.

3. The SaaS application provider then redirects the user to the user’s Azure AD tenant.

4. The user’s browser sends an HTTPS authentication request to the Azure AD tenant.
5. If the user’s Azure AD account represents a federated identity, the user’s browser is redirected to the
on-premises federation server.

6. The user’s browser sends an HTTPS request to the on-premises federation server.

7. If the user has signed in to the on-premises AD DS domain, the federation server requests the AD DS
authentication, based on the user’s existing Kerberos ticket. Otherwise, the user receives a prompt to
authenticate with the AD DS credentials, which the federation server relays to an AD DS domain
controller.
8. The AD DS domain controller verifies the authentication request and then sends the successful
authentication message back to the federation server.
9. The federation server creates the claim for the user based on the rules defined as part of the AD FS
configuration.

10. The federation server places the claims data in a digitally signed security token and forwards it to the
user’s browser.

11. The user’s browser forwards the security token containing claims to Azure AD.

12. Azure AD verifies the validity of the AD FS security token based on the existing federation trust.

13. Azure AD creates a new token to access the SaaS application and sends it back to the user’s browser.
14. The user uses the Azure AD–issued token to access the SaaS application.

AD FS supports authentication based on Web Services Federation (WS-Federation), Web Services Trust
(WS-Trust), Security Assertion Markup Language (SAML), OpenID Connect, and OAuth 2.0. AD FS supports
more advanced identity management solutions, such as account provisioning and deactivation, or
credential mapping.

Authentication occurs through one of several methods. AD FS supports:

• Forms authentication, which is the default for internet-based access.

• Certificate authentication, such as smart card or client certificates.


• Windows authentication, which is the default for intranet-based requests. Forms authentication serves
as the fallback, because many browsers might not support Windows authentication.

• Device authentication, which leverages the device registration capabilities.

• Azure Multi-Factor Authentication, which allows you to take advantage of the multi-factor
authentication capabilities of Azure AD.

In the AD FS architecture, the AD FS servers communicate directly with AD DS. Because of this direct
connectivity, AD FS servers need the same levels of protection as domain controllers. To facilitate external
authentication requests that cross the public internet, AD FS supports a separate set of servers that serve
as proxies. AD FS proxy servers typically reside in a perimeter network, accept authentication requests, and
then forward them to AD FS servers through port 443 (SSL). This is the only port that needs to be open
between the proxy and the AD FS servers.

There have been several versions of AD FS since the initial release, including:

• AD FS 1.0, originally released as a Windows component with Windows Server 2003 R2.

• AD FS 1.1, released with Windows Server 2008 and Windows Server 2008 R2 as an installable server
role.

• AD FS 2.0, released as an installable download for Windows Server 2008 Service Pack 2 and later.
• AD FS 2.1, released with Windows Server 2012 as an installable server role.

• AD FS 3.0, released as an installable server role with Windows Server 2012 R2. AD FS 3.0 does not
require a separate Internet Information Services (IIS) installation, and it includes a new AD FS proxy
role called Web Application Proxy.

• AD FS 4.0, released as an installable server role with Windows Server 2016. Similar to its predecessor,
AD FS 4.0 does not require a separate IIS installation and includes the Web Application Proxy role.

AD FS on Windows Server 2012 R2 and Windows Server 2016


In Windows Server 2012 R2, AD FS includes a federation service role service that acts as an identity
provider or federation provider. It supports device Workplace Join for SSO and seamless multi-factor
authentication. Devices register in AD DS through a Device Registration Service (DRS) and use an X.509
certificate bound to the user context on that machine for device authentication. In a default configuration,
users sign in through AD FS to initiate the registration process by using their Active Directory credentials.

AD FS can provide conditional access control based on user attributes, such as UPN, email or security
group membership, device attributes such as Workplace Join, and request attributes such as network
location or IP address.

Since the introduction of Windows Server 2012 R2, device registration capabilities have transitioned to
Azure AD. Azure AD has become a full-fledged authentication provider for on-premises environments. AD
FS included in Windows Server 2016 takes full advantage of this trend. This allows you to use AD FS to
implement functionality such as:

• Sign-in with Azure Multi-Factor Authentication. This allows you to configure Azure Multi-Factor
Authentication as either the primary or an additional authentication method. It is important to note
that the Multi-Factor Authentication adapter included in AD FS in Windows Server 2016 can protect
access to on-premises resources without an on-premises Azure Multi-Factor Authentication server.
This was necessary in Windows Server 2012 R2.

Additional Reading: For more information regarding configuring AD FS 2016 and Azure
Multi-Factor Authentication, refer to: “Configure AD FS 2016 and Azure MFA” at:
https://aka.ms/azrhs6

• Password-less access from compliant devices. This feature leverages Azure AD device registration to
configure policies that enforce conditional access to on-premises resources based on the status of a
user’s devices.

• Support for sign-in with Windows Hello for Business. AD FS allows users with Windows 10 devices to
sign in to AD FS–protected applications on-premises or from the internet by relying on Windows
Hello, without requiring a password.

Planning for the deployment of AD FS with Azure


When planning for AD FS, you should consider a range of factors, including:

• Server placement

• Network connectivity

• DNS name resolution

• Certificates

• Capacity

• Availability

• Database platform

• Service accounts

• Conditional access

• End-user devices and browsers

Server placement
The most critical components of an AD FS deployment are federation servers, typically operating as a
server farm. Therefore, it is important to consider the proper server placement. AD FS servers must be
domain members, and you should place them behind a firewall on the organization’s internal network.
AD FS proxies typically are not domain members, and they should reside in a perimeter network.

Network connectivity
Firewall configuration is relatively simple because external clients need only TCP port 443 to connect
to the AD FS Proxy or the Web Application Proxy endpoint. The proxy then communicates with AD FS by
using the same TCP port 443.

Note: When implementing certificate authentication, you must also allow connectivity on
TCP port 49443 between external clients and the proxy servers.
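To confirm that the required ports are reachable, you can run a quick connectivity check from an
external client. The following is a sketch using the built-in Test-NetConnection cmdlet; the host
name is an example value:

```powershell
# Check HTTPS connectivity to the federation proxy endpoint
Test-NetConnection -ComputerName adfs.contoso.com -Port 443

# If certificate authentication is enabled, also check the certificate endpoint
Test-NetConnection -ComputerName adfs.contoso.com -Port 49443
```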

DNS name resolution
DNS domain names must match between on-premises Active Directory and Azure AD. This means that
the Active Directory UPN suffix is the same as a custom domain name registered in Azure AD.
In addition, you must ensure that all client requests targeting AD FS resolve to the intended access point
of the AD FS service, depending on whether the clients are on the internal network or on the internet.
Internal clients should connect directly to the AD FS server and external clients should connect to the
federation proxy (AD FS Proxy or Web Application Proxy). This means that the same DNS name should
resolve to different IP addresses depending on the origin of the name resolution query. To achieve this,
you can implement an internal and an external DNS zone and create AD FS–specific records in each.
MCT USE ONLY. STUDENT USE PROHIBITED
10-32 Managing Active Directory infrastructure in hybrid and cloud only scenarios

For example, if the host name to connect to your AD FS infrastructure is adfs.contoso.com, you would
create the DNS records displayed below.

Internal DNS
The contoso.com internal DNS zone would contain the following record.

Host name    Address
adfs         192.168.0.12

Where 192.168.0.12 is the IP address of the AD FS server farm.

External DNS
The contoso.com external DNS zone would contain the following record.

Host name    Address
adfs         131.107.21.65

Where 131.107.21.65 is the public IP address of the proxy array.

Additional Reading: Starting with Windows Server 2016, you can implement this
functionality on a single DNS server by using Windows DNS Server policies. For more information
regarding this feature, refer to: “Split-Brain DNS Deployment Using Windows DNS Server Policies”
at: https://aka.ms/lxcb1o
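As a sketch of that single-server approach, the following Windows Server 2016 DNS cmdlets define an
internal client subnet, a zone scope, and a query resolution policy so that the same host name
resolves differently for internal and external clients. The subnet, policy name, and IP addresses
below are example values matching the records above:

```powershell
# Identify internal clients by their subnet
Add-DnsServerClientSubnet -Name "InternalClients" -IPv4Subnet "192.168.0.0/24"

# Create a separate scope of the contoso.com zone for internal responses
Add-DnsServerZoneScope -ZoneName "contoso.com" -Name "InternalScope"

# External record (default scope) and internal record (internal scope) for the same name
Add-DnsServerResourceRecord -ZoneName "contoso.com" -A -Name "adfs" -IPv4Address "131.107.21.65"
Add-DnsServerResourceRecord -ZoneName "contoso.com" -A -Name "adfs" -IPv4Address "192.168.0.12" `
    -ZoneScope "InternalScope"

# Answer queries originating from the internal subnet out of the internal scope
Add-DnsServerQueryResolutionPolicy -Name "SplitBrainAdfs" -Action ALLOW `
    -ClientSubnet "eq,InternalClients" -ZoneScope "InternalScope,1" -ZoneName "contoso.com"
```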

Certificates
AD FS uses certificates for three main purposes:

• Token encryption/decryption

• Token signing

• SSL encryption

For token encryption/decryption and token signing, AD FS uses self-signed certificates by default,
and renews them automatically. Alternatively, you can use certificates issued by a public certification
authority (CA) or by your internal CA. In either case, you must update relying parties with the new
certificates.
For SSL encryption, certificates must come from a public CA that AD FS clients trust. All AD FS servers
must use the same SSL certificate. The AD FS configuration, including the SSL certificate thumbprint,
replicates through a Windows Internal Database (WID) or is shared across SQL Server databases on all the
members of the AD FS server farm. AD FS proxies do not have to use the same certificate as internal AD FS
servers, because AD FS proxies do not share configuration information. Each AD FS proxy server can use a
different SSL certificate if the common name (CN) on each certificate matches the service name of the
internal AD FS servers. However, it is possible for all AD FS servers and AD FS proxy servers to use the
same certificates.

Capacity
When determining how many AD FS servers to deploy in an organization, you should consider several
factors. These factors include the number of users issuing authentication requests, the server hardware,
and the type of AD FS configuration database. To determine the optimal configuration, use the AD FS
Capacity Planning Spreadsheet for Windows Server 2016 available at https://aka.ms/lh5a7w. You will find
the Windows Server 2012 R2 version of the spreadsheet at https://aka.ms/owotvp.

Availability
You can implement AD FS as a standalone server or as a server farm. We recommend always using an
AD FS server farm, even if the farm consists initially of just one server. This provides the option to add
more AD FS servers later for load balancing and fault tolerance. If you deploy AD FS as a standalone
federation server, you cannot add more servers later.
To provide high availability, AD FS servers typically operate as a server farm, with client requests load-
balanced via software or hardware load balancers. A load balancer exposes a single IP address for the
load-balancing array that you must then associate with the DNS name representing your AD FS endpoint.
You also must include the same name as the CN or subject alternative name of the SSL certificate. You
should also implement the federation proxy servers as a multi-instance, load-balanced service. Note that
Azure AD Connect does not automate the configuration of load balancers.

Database servers
AD FS servers require a database, which you can implement as either a WID or a SQL Server instance. If
you use WID, then one of the AD FS servers in a farm functions as the primary and the remaining servers
function as secondaries. The primary federation server is initially the first federation server in the farm, and
it has a read/write copy of the AD FS configuration database. All other federation servers in the farm (the
secondary servers) regularly poll the primary server and synchronize any changes to a locally stored, read-
only copy of the AD FS configuration database. By default, the poll interval is five minutes. You can
change its value by using the Set-AdfsSyncProperties Windows PowerShell cmdlet. To force immediate
synchronization, restart the AD FS service.

Secondary servers provide fault tolerance for the primary server, and with appropriate server placement,
they can load-balance access requests across network sites. If the primary federation server is offline, all
secondary federation servers continue to process requests as normal. However, you cannot make changes
to the AD FS database until the primary federation server is back online or until you promote a secondary
server to the primary role. You can use the Set-AdfsSyncProperties Windows PowerShell cmdlet to
manage primary and secondary role assignment. If SQL Server stores AD FS information, all servers in the
farm are considered primaries because they all have read/write access to the database.
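The polling and role settings described above map directly to Set-AdfsSyncProperties parameters.
The following is a sketch; the primary server name is an example value:

```powershell
# View the current synchronization settings, including the poll interval
Get-AdfsSyncProperties

# Change the poll interval from the default 300 seconds to 60 seconds
Set-AdfsSyncProperties -PollDuration 60

# Promote a secondary server to the primary role (run on that secondary)
Set-AdfsSyncProperties -Role PrimaryComputer

# Repoint the remaining farm members at the new primary (run on each of them)
Set-AdfsSyncProperties -Role SecondaryComputer -PrimaryComputerName adfs01.contoso.com
```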

Service accounts
You must provide an account that will provide a security context for the AD FS service. You can use a
domain user account or a Group Managed Service Account (gMSA) for this purpose. The latter requires
domain controllers that run Windows Server 2012 or newer. The advantage of a gMSA is that its password
changes are automatic, which lowers management overhead and increases security.
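If you prefer to pre-create the gMSA rather than have the AD FS configuration wizard create it, the
following Active Directory PowerShell sketch shows the general approach. The account, host, and
group names are example values, and the KDS root key is a one-time, per-forest prerequisite:

```powershell
# One-time prerequisite: create the KDS root key. In a lab, backdating it
# makes the key usable immediately instead of after the replication delay.
Add-KdsRootKey -EffectiveTime ((Get-Date).AddHours(-10))

# Create the gMSA; the DNS host name matches the federation service name,
# and only members of the specified group can retrieve the managed password
New-ADServiceAccount -Name 'fsgmsa' -DNSHostName 'adfs.contoso.com' `
    -PrincipalsAllowedToRetrieveManagedPassword 'ADFS-Servers'
```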

Conditional access
You might want to implement conditional access. For example, you might prevent users residing in a
location from successfully authenticating or force them to provide a second form of authentication.

End-user devices and browsers


Any current web browser with JavaScript enabled can work as an AD FS client. However, Microsoft has
only tested Internet Explorer, Microsoft Edge, Mozilla Firefox, and Safari on Mac.

To allow single sign-on, users must enable cookies when interacting with federation servers and their
proxies. Cookies prevent repeating prompts for sign-ins within the same session. The authentication
cookie is signed but not encrypted. AD FS provides encryption by ensuring SSL-based communication.

Deploying AD FS
Azure AD Connect simplifies the AD FS
installation process. After you ensure that your
environment meets the prerequisites described in
the previous topic, the process of installing
Windows roles and features and their
dependencies is automated. For deploying AD FS
by using Azure AD Connect, you need:

• One or more computers running Windows Server 2012 R2 or newer to host AD FS.

• One or more computers running Windows Server 2012 R2 or newer to host Web Application Proxy.

• An SSL certificate for the federation service name that you intend to use.

To install AD FS by using Azure AD Connect, perform the following steps:

1. Start Azure AD Connect setup.

2. On the Express Settings page, click Customize.

3. On the Install Required Components page, click Install.

4. On the User Sign-in page, select Federation with AD FS, and then click Next.

5. On the Connect to Azure AD page, type the credentials for the account that has the Global
Administrator role in the Azure AD tenant with which you want to establish federation.

6. On the Connect your directories page, specify the Active Directory forest and click Add Directory.
When prompted, in the AD Forest Account window, ensure that the Use existing account option is
selected, and then type a pre-created account you intend to use for synchronization. Alternatively,
you can select the Create new account option, provide the credentials of an AD DS Enterprise
Administrator account, and then click OK. This will create the synchronization account for you.

7. On the Connect your directories page, click Next.

8. On the Azure AD sign-in configuration page, verify that your Azure AD tenant has an existing
custom domain. The name of the custom domain should match the Active Directory UPN suffix. Note
that, at this point, your custom domain does not have to be verified yet. If a custom domain does not
exist, create a new custom domain and refresh the page.

9. On the same page, you will also find the option to specify which on-premises attribute you want to
use as the Azure AD user name. By default, Azure AD Connect uses userPrincipalName for this
purpose.

10. Click Next.

11. On the Domain and OU filtering page, specify any custom filtering settings.

12. On the Uniquely identifying your users page, select the criteria to use when merging multiple
accounts of individual users that exist in more than a single AD DS forest.

13. On the Filter users and devices page, specify whether to limit the scope of synchronization to
members of an individual group.

14. On the Optional Features page, select additional synchronization features that you intend to
implement.

15. On the AD FS Farm page, select Configure a new AD FS farm. Browse to and select the certificate
for SSL, and then provide the password for the certificate. Alternatively, you can use a certificate
installed on servers that will host the AD FS server role. After you provide the certificate, you will also
need to select the certificate subject name and prefix that you intend to use for the federation
servers.

16. Click Next.

17. On the AD FS servers page, add one or more domain-joined servers that will host AD FS by
specifying their names or IP addresses.

18. On the Web Application Proxy servers page, add one or more servers that will provide the AD FS
proxy functionality. When prompted, type the credentials for the user account that has local
administrator privileges on the proxy servers. That account will establish connectivity with Web
Application Proxy.

Note: All servers must be accessible from the Azure AD Connect server via Windows
Remote Management (WinRM).

19. On the Domain Administrator credentials page, specify the user name and the password of a
domain administrator account.

20. On the AD FS service account page, create a new gMSA, specify an existing one, or provide an
existing domain user account.

21. On the Azure AD Domain page, select the Azure AD domain that you want to federate with AD DS.
If the domain is not verified, you will have to perform verification at this point before proceeding.

22. On the Ready to configure page, review the installation steps, ensure that the Start the
synchronization process as soon as the configuration completes check box is selected, and then
click Install.

23. After the installation completes, you can verify AD FS functionality by clicking Verify.
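After the wizard finishes, you can also confirm that the Azure AD domain now uses federated
authentication. The following is a sketch with the MSOnline module; the domain name is an example:

```powershell
# Sign in to Azure AD and check the authentication type of the federated domain
Connect-MsolService
Get-MsolDomain -DomainName contoso.com | Select-Object Name, Authentication, Status
```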

Managing and maintaining AD FS
After you deploy AD FS, you will occasionally need to perform various management and
maintenance tasks.

Managing the certificate life cycle
As mentioned earlier, the self-signed certificates that AD FS generates for signing purposes
support automatic rollover, resulting in automatic renewal once per year. You must manage
renewal of all other certificates that an internal or external CA issued, and use them to
replace the existing ones. You can view certificate expiration dates for the service communication,
token-encrypting, token-decrypting, and token-signing certificates by using the AD FS Management
console. In the console tree, expand Service, and then click Certificates. You can also use the
Get-AdfsCertificate Windows PowerShell cmdlet to view certificate details.
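The same expiration check can be scripted. The following is a sketch using the AD FS PowerShell
module on a federation server:

```powershell
# List each AD FS certificate with its type, subject, and expiration date
Get-AdfsCertificate | Select-Object CertificateType,
    @{ n = 'Subject';  e = { $_.Certificate.Subject } },
    @{ n = 'NotAfter'; e = { $_.Certificate.NotAfter } }
```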

Converting domains to federated
Configuring an on-premises Active Directory forest as federated creates a relying party trust between
Azure AD and the on-premises AD DS domain. After conversion, synchronized on-premises users become
federated users, making it possible for them to use their organizational credentials to authenticate when
accessing Azure resources. If you have an additional existing Active Directory domain that you need to
implement as a federated domain, you can run Azure AD Connect again.
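As an alternative to rerunning Azure AD Connect, you can convert an additional verified domain by
using the MSOnline module from the primary AD FS server. The following is a sketch; the domain name
is an example value:

```powershell
# Connect to Azure AD, then convert an additional verified domain to federated.
# -SupportMultipleDomain is required when the farm serves more than one domain.
Connect-MsolService
Convert-MsolDomainToFederated -DomainName fabrikam.com -SupportMultipleDomain
```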

Monitoring AD FS with Azure AD Connect Health
You can monitor AD FS functionality by relying on Azure AD Connect Health for AD FS, which uses agents
that reside on AD FS servers and proxy servers. These agents collect events, configuration settings, and
performance metrics, and forward them to the cloud-based Azure AD Connect Health service. The Azure
portal displays the collected data, presenting it in the following autogenerated views:

• The Alerts view shows information about active alerts that represent events, configuration
information, and synchronization status of AD FS. For critical alerts, you can subscribe to email
notifications. Every alert contains resolution steps, links to additional documentation, and a history of
the previously resolved alerts.

• The Usage Analytics view shows information about successful logins, the authentication method,
and the number of users who are accessing AD FS–protected applications. You can also generate
audit reports from AD FS servers.

• The Monitoring view shows a summary of performance counters that are collected from AD FS
servers, such as CPU utilization, memory, and latency.

To install the Azure AD Connect Health agent on the AD FS server, perform the following steps:

1. Sign in to the Azure portal with an account that has the Global Administrator role.

2. In the Azure Marketplace, locate the Azure AD Connect Health extension.

3. Download the Azure AD Connect Health agent for AD FS.

4. Double-click the .exe file that you downloaded.

5. Click Install, and then follow the installation procedure.

6. After the installation completes, click Configure Now.

7. This opens Windows PowerShell with elevated privileges. Run the
Register-AzureADConnectHealthADFSAgent cmdlet.

8. Sign in to Azure to complete the agent configuration.

Question: What are AD FS deployment options that provide resiliency and scalability?

Lab: Implementing and managing Azure AD synchronization
Scenario
Adatum Corporation users access on-premises applications by authenticating once, during initial sign-in
to their client computers. While evaluating Azure for Adatum, you must verify that Adatum users can
continue using their existing credentials to access Azure resources. In addition, you must verify that
attribute changes to Active Directory user and group accounts will automatically replicate to Azure AD.

Objectives
After completing this lab, you will be able to:

• Configure directory synchronization.

• Synchronize on-premises Active Directory with Azure Active Directory.

Note: The lab steps for this course change frequently due to updates to Microsoft Azure.
Microsoft Learning updates the lab steps frequently, so they are not available in this manual. Your
instructor will provide you with the lab documentation.

Lab Setup
Estimated Time: 60 minutes
Virtual machine: 20533E-MIA-CL1

User name: Student

Password: Pa55w.rd
Before starting this lab, ensure that you have performed the “Preparing the lab environment”
demonstration tasks at the beginning of the first lesson in this module, and that the setup script has
completed.

Exercise 1: Configuring directory synchronization
Scenario
Adatum plans to integrate its AD DS with Azure AD. To test this plan, you need to deploy and configure
Azure AD Connect to synchronize your test Active Directory environment with a test Azure AD tenant. To
eliminate the need to verify a custom DNS domain, you will be using the default DNS name of the test
Azure AD domain.

Exercise 2: Synchronizing directories
Scenario
Adatum wants to test Azure AD synchronization by changing a few attributes of a synchronized user
account and then performing manual synchronization.

Question: How would you implement OU–level filtering for directory synchronization?

Question: When would you use Azure AD Connect custom setup?



Module Review and Takeaways

Common Issues and Troubleshooting Tips
Typical problems that affect Azure AD Connect–based synchronization include:

• Connectivity issues

• Errors during synchronization

• Password synchronization issues

Review Question

Question: What Azure AD integration option would you consider to be optimal in your
environment?

Module 11
Using Microsoft Azure-based management, monitoring,
and automation
Contents:
Module Overview 11-1

Lesson 1: Using Azure-based monitoring and management solutions 11-2

Lesson 2: Implementing Automation 11-17


Lesson 3: Implementing Automation runbooks 11-22

Lesson 4: Implementing Automation–based management 11-29

Lab: Implementing Automation 11-33

Module Review and Takeaways 11-34

Module Overview
As described in previous modules, you can configure Microsoft Azure–based monitoring on a per-
resource basis. You can also use resource-specific management capabilities for resource provisioning,
maintenance, and deprovisioning. However, Azure helps you optimize these tasks by providing extensive
monitoring and management capabilities based on services such as Azure Monitor, Azure Advisor, Azure
Security Center, Azure Log Analytics, and Automation. Some of them, including Log Analytics and
Automation, allow you to extend the scope of monitoring and management to your on-premises
environment. In addition, you can further strengthen the security and manageability of your Azure
environment by leveraging the functionality of Azure Resource Manager, such as Role-Based Access
Control (RBAC), policies, initiatives, and locks. In this module, you will learn about these technologies and
their implementation.

Objectives
After completing this module, you will be able to:
• Use Azure-based management and monitoring services.

• Implement the core components of Automation.

• Implement different types of Automation runbooks.


• Implement Automation-based management.

Lesson 1
Using Azure-based monitoring and management solutions
Azure offers several services that provide comprehensive monitoring and management functionality.
While most of them target Azure-based environments, some also support on-premises resources, allowing
customers to implement a consistent support model in hybrid scenarios. Customers can benefit from
these services in business scenarios such as tracking, auditing, or troubleshooting past events, optimizing
administration of their existing deployments, and forecasting and capacity planning of future
deployments.

This lesson describes several services that deliver these benefits, including Log Analytics, Security Center,
and Monitor. It also explores governance aspects of Azure Resource Manager, which Module 1,
“Introduction to Microsoft Azure” described briefly.

Lesson Objectives
After completing this lesson, you will be able to:

• Describe the primary characteristics and architectural components of Log Analytics.

• Explain the role that Log Analytics serves in monitoring and managing Azure resources.

• Implement Log Analytics.

• Use Security Center to identify and remediate security related threats.

• Set up comprehensive resource monitoring by using Monitor.

• Enforce governance by taking advantage of Azure Resource Manager functionality.

Demonstration: Preparing the lab environment
Perform the tasks in this demonstration to prepare the lab environment. The environment will be
configured while you progress through this module, learning about the Azure services that you will use in
the lab.

Important: The scripts used in this course might delete objects that you have in your
subscription. Therefore, you should complete this course by using a new Azure subscription. You
should also use a new Microsoft account that has not been associated with any other Azure
subscription. This will eliminate the possibility of any potential confusion when running setup
scripts.

This course relies on custom Azure PowerShell modules, including Add-20533EEnvironment to prepare
the lab environment, and Remove-20533EEnvironment to perform clean-up tasks at the end of the
module.

Introducing Log Analytics
Log Analytics extends the functionality
implemented in Microsoft Operations
Management Suite, which, in turn, superseded
Azure Operational Insights and Microsoft System
Center Advisor. Knowing the lineage of the service
might help you understand references to
Operations Management Suite that you will
encounter throughout this module and in the
product documentation. The core functionality of
Log Analytics includes log collection, analyzing log
content, and extensive search capabilities. Log
Analytics also offers a range of monitoring and
management features by integrating with Azure services such as Automation, Backup, and Recovery
Services. You can implement this integration by using additional management solutions that are part of
the Log Analytics offering.

Architecture
From the architectural standpoint, Log Analytics operates as a web service, which interacts with a number
of distinct components that facilitate data collection, analysis, and visualization. The Log Analytics
architecture consists of the following components:

• Connected data sources represent monitored systems, which belong to one of the following
categories:
o Windows or Linux servers, or Windows client operating systems, running the Microsoft
Monitoring Agent connected to the Log Analytics service (the agent is available for Windows
32-bit and 64-bit systems, in addition to Linux).

Note: These systems can reside on-premises, in Azure, or in datacenters that other cloud
providers manage.

o System Center Operations Manager management groups, including all systems that are part of
these groups. Considering that Operations Manager is supported on-premises and in Azure, the
integration with Log Analytics is available in each of these scenarios.

o Azure Storage accounts used by Azure VMs configured with the Windows Azure Diagnostic VM
extension or the Linux Azure Diagnostic VM extension, or by Cloud Service worker and web roles
with the Windows Diagnostic VM extension.

o Azure Activity Log that the Azure platform uses to record all write operations. These operations
include PUT, POST, and DELETE actions targeting any of the Azure Resource Manager resources
in your subscriptions.

o Azure service diagnostics, logs, and metrics generated by a wide range of Azure services.
Depending on the service type, you might be able to:
▪ Write service diagnostics directly to Log Analytics.
▪ Collect diagnostics data that services store in an Azure Storage account.
▪ Rely on Log Analytics connectors to provide a communication path between services and
Log Analytics.
▪ Use Azure PowerShell scripts or Automation runbooks to collect and post service data to
Log Analytics.
▪ Transfer any service-specific metrics and logs that Monitor already collects.
▪ Implement HTTP Data Collector application programming interface (API)–based code to
write logs into Log Analytics.
• Log Analytics Repository designates Azure-based storage for data that Log Analytics collects from
connected sources.

• Operations Management Suite workspace represents the administrative and security boundary of the
Log Analytics environment. It also defines the scope of data collection, analysis, and visualization.
Each workspace has a unique Workspace ID and is associated with the primary and secondary keys
that serve as its authentication mechanism. Knowledge of these parameters (the ID and at least one
of the two keys) is necessary to join a system to the workspace (this is equivalent to the way of
controlling access to an Azure Storage account). You can create multiple workspaces in the same
Azure subscription.

• Log Analytics management solutions build on the core functionality of the service by implementing
log processing and analytics rules. These rules derive meaningful information from raw data collected
from connected data sources. Some of the Log Analytics solutions also extend the scope of collected
data. All currently available solutions appear in the Azure Marketplace. You can browse through this
list and add them directly to your workspace.

• Log Analytics graphical interface in the Azure portal offers a convenient approach to configuring
various aspects of your workspace, including data collection, solution management, and custom
analytics.

Note: If data sources do not have direct connectivity to Azure, you must implement
Operations Management Suite Gateway. Operations Management Suite Gateway serves as a
proxy and relays collected data to the Log Analytics Repository.

Management solutions
Log Analytics management solutions constitute the primary method of extending the core capabilities of
the service. To leverage this extensibility, you must simply add to the workspace any solution that is
available in the Marketplace. However, you should keep in mind that adding solutions impacts the volume
of the collected data, which has implications for network bandwidth utilization and pricing.

Some of the most commonly used solutions include:

• Active Directory Health Check. Assesses health and vulnerabilities of your Active Directory Domain
Services (AD DS) deployment.

• Change Tracking. Keeps track of changes to your managed environment.

• Key Vault Analytics. Monitors Azure Key Vault logs, allowing you to determine its usage patterns.
• Antimalware Assessment. Checks the status of antivirus and antimalware scans on monitored systems.

• Network Performance Monitor. Performs near real-time network monitoring to help you evaluate
network performance and reliability.

• Security and Audit. Collects security-related events so that you can respond promptly or even prevent
any security exploits.

• Service Fabric Analytics. Monitors operational status of Service Fabric clusters.

• Service Map. Discovers dependencies between software components across all monitored servers.

• Azure SQL Analytics. Assesses performance, health, and vulnerabilities of your Microsoft SQL Database
deployments.

• Update Management. Identifies missing system updates on monitored systems running Windows or
Linux.

• Wire Data 2.0. Gives you the ability to analyze network traffic based on collected metadata.

• Automation. Integrates with Automation and delivers its status and statistical data. Simplifies
management by providing links to automation-related features in the Azure portal.

Microsoft combined some of these solutions to align them with related Azure services, providing four
functional groups of solutions available from the Marketplace:

• Insight & Analytics. Includes Network Performance Monitor, Service Map, and Wire Data 2.0.
• Automation & Control. Includes Automation Hybrid Worker, Change Tracking, and Update
Management. Hybrid Runbook Worker allows customers to extend the scope of Automation to their
on-premises environments.

• Security & Compliance. Includes Security and Audit, and Antimalware Assessment.
• Backup and Site Recovery (OMS). Provides integration with Azure Backup and Azure Site Recovery.
Oversees the status of backups and monitors the replication status of systems protected by a Site
Recovery vault.

Service pricing
In April 2018, Microsoft simplified the Log Analytics pricing model and enhanced its capabilities. For
example, Microsoft increased the threshold on the amount of data that customers can transfer to their
workspace free of charge, eliminated the corresponding daily upload limit, and extended the maximum
duration of data retention period. Service pricing depends on two factors: data ingestion and data
retention. At the time of authoring this content, customers do not pay for the first 5 gigabytes (GB) of
data that Log Analytics collects per month. Similarly, there is no cost for retaining data for up to 31 days.
For an additional charge, you can extend that period to two years.

Note: Enterprise Agreement (EA) customers can choose per-node pricing instead of the
per-GB model.

Log Analytics as a component of Azure
Log Analytics has a unique role in the range of
Azure services. Its primary purpose is to facilitate
the monitoring of your existing on-premises and
cloud-based environments. It also offers
management capabilities through integration with
other Azure services such as Automation, Backup,
or Site Recovery. You can implement its
functionality through management solutions,
which you can easily add after provisioning the
core service. Log Analytics also closely integrates
with Monitor and Security Center.

Log Analytics supports direct data collection from Windows and Linux virtual machines (VMs). It also
allows you to collect and analyze VM diagnostics data residing in Azure Storage accounts. Its scope
extends to on-premises locations and third-party cloud environments, by relying on agents deployed to
monitored computers, or by leveraging integration with Operations Manager.

Implementing Log Analytics solutions
To implement Log Analytics solutions, perform the
following steps:

1. From the Azure portal, create an Operations Management Suite workspace by adding the
Log Analytics service to your subscription. When you create the workspace, you will need
to specify:

o A unique name consisting of between 4 and 63 alphanumeric characters.

o The Azure subscription that will provide the administrative and billing boundary.

o The Azure region to host the workspace.

o The resource group where the workspace will reside.

2. After you have created the workspace, you can navigate to it directly within the Azure portal.

3. In the Azure portal, you can search for management solutions in the Marketplace and, once you
identify the ones you want, add them to the workspace.
4. To collect data based on the solutions that you added, you must connect to data sources. The
method depends on the location and type of the target systems. For example:

o To add VMs that reside in the same subscription as the Operations Management Suite
workspace, use the Connect option available from the Operations Management Suite workspace
in the Azure portal. This will automatically install and configure the Log Analytics VM extension
within the operating system for these VMs. Alternatively, you can use Azure PowerShell, Azure
CLI, or Azure Resource Manager templates.

o To add physical computers or VMs that are not part of your Azure subscription or your
Operations Manager environment, download and install the Microsoft Monitoring Agent on
each of them. The download link is available directly from the Log Analytics blade of the Azure
portal. The installation will require you to provide the workspace ID and one of two workspace
keys (primary or secondary).

Additional Reading: For details on adding Windows computers to your Operations
Management Suite workspace, refer to: “Collect data from Windows computers hosted in your
environment” at: https://aka.ms/Jff5ue
For details on adding Linux computers to your Operations Management Suite workspace, refer
to: “Collect data from Linux computers hosted in your environment” at: https://aka.ms/ld8ze7

o To add computers that are part of your Operations Manager environment, use the Operations
Management Suite Connection from the Operations Manager console.

o To add diagnostics data from VMs (Windows or Linux) and Azure Cloud Services web and worker
roles configured with the Azure Diagnostic VM extension, specify the Azure Storage account that
stores the data.

o To add Azure service logs and metrics that other Azure services generate, you can:
 Write service diagnostics directly to Log Analytics. This option is available with any
service that supports the Monitor functionality. To implement it, you can use the
Set-AzureRmDiagnosticSetting cmdlet. Alternatively, you can apply an Azure Resource
Manager template that configures the workspaceId property of the service.
 Collect data from the Azure Activity Log.
 Collect diagnostics data that services store in an Azure Storage account.
 Configure connectors to provide a communication path between services and Log Analytics.
This approach allows you to collect data that Azure Application Insights generates.
 Use scripts to collect and post service data to Log Analytics.

Additional Reading: For details regarding collecting Azure service logs and metrics for use
in Log Analytics, refer to: “Collect Azure service logs and metrics for use in Log Analytics” at:
https://aka.ms/uyup65
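The first option above, writing service diagnostics directly to Log Analytics, can be scripted. The following Azure PowerShell sketch routes the diagnostics of an existing resource to a workspace; both resource IDs are placeholders, and the parameter names reflect the AzureRM.Insights module current at the time of writing:

```powershell
# Route a resource's diagnostics directly to a Log Analytics workspace
# (both resource IDs below are placeholders)
Set-AzureRmDiagnosticSetting `
    -ResourceId "/subscriptions/<subscription-id>/resourceGroups/ExampleGroup/providers/Microsoft.Network/networkSecurityGroups/ExampleNsg" `
    -WorkspaceId "/subscriptions/<subscription-id>/resourceGroups/ExampleGroup/providers/Microsoft.OperationalInsights/workspaces/ExampleWorkspace" `
    -Enabled $true
```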

5. Specify one or more logs from which you want to upload content to the Log Analytics Repository.
You have the option to enable data collection for Windows Event logs, Windows performance
counters, Linux performance counters, Internet Information Services (IIS) logs, Syslog, and custom
logs.
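The workspace-creation and agent-deployment steps described above can be sketched with Azure PowerShell. The resource group, workspace, and VM names below are placeholders, and the cmdlets shown are from the AzureRM modules current at the time of writing:

```powershell
# Create the Log Analytics workspace (names and region are placeholders)
$workspace = New-AzureRmOperationalInsightsWorkspace -ResourceGroupName "ExampleGroup" `
    -Name "example-oms-workspace" -Location "East US" -Sku "PerGB2018"

# Retrieve the workspace keys that the agent requires
$keys = Get-AzureRmOperationalInsightsWorkspaceSharedKeys `
    -ResourceGroupName "ExampleGroup" -Name "example-oms-workspace"

# Install the Log Analytics VM extension on an existing Windows VM
Set-AzureRmVMExtension -ResourceGroupName "ExampleGroup" -VMName "ExampleVM" `
    -Name "MicrosoftMonitoringAgent" -Publisher "Microsoft.EnterpriseCloud.Monitoring" `
    -ExtensionType "MicrosoftMonitoringAgent" -TypeHandlerVersion "1.0" `
    -Location "East US" `
    -Settings @{ "workspaceId" = $workspace.CustomerId.ToString() } `
    -ProtectedSettings @{ "workspaceKey" = $keys.PrimarySharedKey }
```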

Note: By default, whenever you add a solution to your Operations Management Suite
workspace, that solution is automatically deployed to all managed computers that support it. You
have the option of implementing Scope Configurations, which limit the scope of a deployment by
using a computer group that you designate as its target. To create a computer group, you can
use Log Analytics log search results or an import from AD DS, Windows Server Update Services,
or System Center Configuration Manager. Scope Configurations are not available for solutions
that do not rely on the Log Analytics VM extension or Microsoft Monitoring Agent, nor are they
supported in scenarios where the target computers are part of an Operations Manager
management group.

Note: At the time of authoring this content, the Scope Configurations functionality is in
preview.

After data is uploaded to the Log Analytics repository, the service analyzes its content by applying logic
defined by the solutions that you added to the workspace. The portal displays the outcome of this analysis
on the Overview blade of the Log Analytics workspace. From the Log Search blade, you can run built-in
or custom searches to review collected data. For more advanced searches, use the Azure Log Analytics
portal accessible via the Advanced Analytics link on the Log Search blade. The query language supports
queries that correlate data from multiple sources. You can manipulate results by applying a variety of
operations, including filtering, aggregation, averaging, summarization, and sorting. You can also create
dashboards that visualize query results and share them with other users residing in the Azure Active
Directory (Azure AD) tenant associated with the subscription that hosts the Log Analytics deployment. In
addition, you can configure alerts that run your queries at regular intervals, evaluate their results, and
carry out a custom action according to the criteria that you define.
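As an illustration of the query language, the following sample correlates and summarizes collected data. The Event table is populated only after you enable Windows Event log collection, and the one-day window is an arbitrary choice:

```kusto
// Count error events per computer over the last day, sorted by frequency
Event
| where TimeGenerated > ago(1d)
| where EventLevelName == "Error"
| summarize ErrorCount = count() by Computer
| sort by ErrorCount desc
```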

Additional Reading: For details on running queries in the Log Search portal, refer
to: “View or analyze data collected with Log Analytics log search” at: https://aka.ms/m38hp7
For details on visualizing data based on Log Analytics queries, refer to: “Create and share
dashboards of Log Analytics data” at: https://aka.ms/Tv9qsf
For details on configuring Log Analytics alerts, refer to: “Respond to events with Log Analytics
Alerts” at: https://aka.ms/Aak2jt

Demonstration: Implementing Log Analytics solutions


In this demonstration, you will see how to:

• Create an Operations Management Suite workspace.

• Install the Log Analytics VM extension on a VM.

• Add solutions to Log Analytics.


• Perform searches of collected data.

• Configure log collection.

Monitoring and managing security with Security Center


Security Center is a cloud-based service that provides centralized monitoring and management of
security-related aspects of your Azure environment. You can extend some of its capabilities to your
on-premises environment, facilitating implementation of consistent security policies in hybrid scenarios.

Security Center utilizes the capabilities of Log Analytics to collect security-related data from on-premises
computers and cloud-based services that you manage. It presents the collected data in a cohesive and
comprehensive manner directly within the Azure portal. It assists you with implementing remediation
actions that address vulnerabilities and threats that it detects. It can also alert you about security-related
events based on the criteria that you specify.

To configure Security Center, you define a security policy and apply it to an Azure subscription. Each
policy definition includes the following settings:

• Data Collection. Use Data Collection settings to enable automatic provisioning of monitoring agents
and specify the corresponding workspace where the collected data will reside. You can use an existing
Operations Management Suite workspace or accept the default setting, which automatically creates a
new workspace.

• Security policy. Use the Security policy settings to specify the types of recommendations that
interest you. These recommendations affect the following aspects of the environment:

o System updates. This option requires that you enable data collection.

o Security configuration. This option requires that you enable data collection.

o Endpoint protection. This option requires that you enable data collection.

o Disk encryption

o Network security groups

o Web application firewall

o Next generation firewall

o Vulnerability assessment

o Storage encryption

o Just-in-time VM access

Note: Just-in-time VM access blocks access to VMs via their external endpoints. When a
user requests just-in-time VM access to a VM for a specific period, Security Center first evaluates
whether the user has sufficient Role-Based Access Control permissions to connect to that VM. If
the user has the required permissions, Security Center then automatically configures a Network
Security Group that allows inbound connectivity to the target VM for the specified duration.

o Adaptive application controls

Note: Adaptive application controls evaluate software running within VMs and create
AppLocker whitelisting rules that reflect existing application usage. You can apply these rules
directly from the Azure portal.
At the time of authoring this content, adaptive application controls are in preview.

o SQL auditing and threat detection

o SQL encryption
• Email notifications. Use the Email notifications settings to specify the contact email and phone
number to which the Microsoft Security team will deliver notifications about events that compromise
the resources within the scope of the policy.
• Pricing tier. Use the Pricing tier settings to choose between two pricing tiers and their respective
security services:

o Free tier includes security assessment, security recommendations, basic security policy, and
connected partner solutions.
o Standard tier, in addition to free tier services, includes just-in-time VM access, adaptive
application controls, network threat detection, and VM threat detection.

Note: You must switch to the Standard pricing tier to extend Security Center monitoring
and management to computers in your on-premises datacenters and non-Microsoft cloud
providers. In addition, you must provide a Log Analytics workspace where the data collected from
these computers will reside.

By default, all resource groups inherit subscription-level policies. However, you can override this
inheritance and apply a different pricing tier at the resource-group level.

After Security Center collects data according to the Data Collection settings you specified, it will provide
recommendations to enhance the security of your environment. For example, it might recommend
assigning Network Security Groups to subnets of Azure virtual networks, enabling encryption of Azure
SQL databases, or configuring Endpoint Protection within the operating system of VMs. Security Center
will guide you through implementation of these settings, redirecting you to the relevant subnets, Azure
SQL Database instances, or VMs.

You can verify the outcome of these remediation actions and evaluate the security status of your
environment by reviewing dashboards accessible from the Security Center blade in the Azure portal.
These dashboards include the following types of information:

• Events dashboard. Contains a chart representing a timeline of events that Security Center collected
and processed. It also lists these events by event type. This listing includes notable events that
Security Center marked according to both its predefined settings and settings that you configured by
using Log Analytics queries.

• Search dashboard. Provides the search capability to retrieve security data that Security Center
collected. In the Standard pricing tier, this data resides in a Log Analytics workspace that you specify.
To perform searches, regardless of the pricing tier, you use the Log Analytics query language.
• Identity & Access dashboard. Displays graphs and lists of events associated with authentication and
authorization of access requests to managed resources within your Azure subscription. This allows
you, for example, to identify and mitigate brute force Remote Desktop attacks against VMs running
Windows Server.

Note: At the time of authoring this content, the Identity & Access dashboard is in preview.

• Threat intelligence dashboard. Visualizes the threat intelligence stance of your environment. In this
case, the data originates not only from your resources, but also from a wide range of Microsoft
services that constantly monitor for potential global threats. The Threat intelligence dashboard
consists of three sections displaying, respectively, detected threat types, threat origin, and threat
intelligence map. This information helps you identify the nature of a threat, its origin, and any
compromised resources.

Monitoring cloud and on-premises resources with Monitor


Monitor is a core component of the Microsoft strategy to extend comprehensive cloud-based monitoring
functionality beyond Azure to on-premises datacenters and non-Microsoft cloud providers. Other Azure
manageability features that are part of this strategy include:

• Advisor. Uses resource usage telemetry to provide recommendations for optimizing resource
configuration in terms of performance, security, and availability.

• Azure Service Health. Reports platform-related issues that might affect your resources.

• Azure Activity Log. Tracks events representing operations that alter the state of your resources, such
as configuration changes, service health incidents, and autoscale operations.

Monitor, Advisor, Azure Service Health, and Azure Activity Log complement several other services that
deliver more focused, in-depth monitoring capabilities:

• Deep infrastructure monitoring. In addition to detailed monitoring, these services also provide
analytics capabilities targeting Azure infrastructure. Some prime examples include Log Analytics
combined with such management solutions as Container Monitoring or Service Map, in addition to
network monitoring tools such as Network Watcher, Network Performance Monitor, ExpressRoute
Monitor, DNS Analytics, and Service Endpoint Monitor.
• Deep application monitoring. This category includes Application Insights, which facilitates monitoring
of performance, availability, and usage of web-based applications, regardless of their location.

Both core and deep monitoring services share a number of capabilities that provide a consistent approach
to configuring alerts. Common action groups allow you to designate alert-triggered actions and their
recipients. These services also let you design custom dashboards and analyze metrics by leveraging tools
such as Metrics Explorer or Microsoft Power BI.

As described in previous modules, you can configure and view performance-related settings, such as
monitoring, diagnostics, and autoscaling, for individual Azure resources directly from their respective
blades in the Azure portal. While this approach offers simplicity, it can be inefficient for larger number of
resources. With Monitor, you have a single point of reference for most of the relevant configuration
settings and monitoring data. This not only improves user experience but also helps maintain consistent
configuration across your entire subscription.

Monitor supports collection and monitoring of metrics, activity and diagnostics logs, and events from a
wide range of Azure services and computers residing in on-premises datacenters and non-Microsoft cloud
providers. It provides a quick way to assess the status of your environment in the Azure portal. It presents
a summarized view of triggered alerts, activity log errors, Azure Service Health events, and data
originating from Application Insights. You can also access its data by using Azure PowerShell, Azure CLI,
REST API, and .NET software development kit (SDK). Additionally, Monitor allows you to archive collected
data for historical analysis or compliance purposes in Azure Storage or route it to Azure Stream Analytics
or non-Microsoft services via Event Hub.

Additional Reading: For an up-to-date listing of data types and their respective
resources available in Monitor, refer to: “Consume monitoring data from Azure” at:
https://aka.ms/Wucvsw

Monitor also offers comprehensive support for alerting. It allows you to configure four types of alerts:

• Classic metric alerts with a minimum frequency of five minutes.

• Near real-time metric alerts with a minimum frequency of one minute. A change in metrics that
satisfies the alert condition will trigger a metric-based alert within one minute. This makes the
Monitor-based approach suitable for time-critical scenarios. Near real-time metrics offer several other
advantages:
o Support for action groups, which are collections of settings that designate recipients of alert
notifications and the corresponding notification actions. The action types include initiating a
voice call or a text, sending an email, calling a webhook, forwarding data to an IT Service
Management tool such as ServiceNow, calling an app built by using the Web Apps feature of
Azure App Service, or invoking an Automation runbook. Creating action groups allows you to
reuse the same notification settings for multiple alerts.

o Alerts based on conditions of two or more metrics.



o Multidimensional metric-based alerts that allow you to generate alerts based on one or more
dimensions of a metric. A dimension identifies a subset of related metrics based on a key-value
pair. For example, for a Windows Server instance, the metric Available disk space can have a
dimension named Drive, with its values representing individual drive letters.
o Support for conditions such as average and total, in addition to the minimum and maximum
values available with classic metric alerts.

• Classic activity log alerts parsing streaming log data, responding to events such as a Service Health
incident or deletion of a VM.

• Activity log alerts (Preview), which function similarly to classic activity log alerts but support
configuration by using Azure Resource Manager templates.

Note: At the time of authoring this content, Monitor supports alerts that combine
monitoring of up to two metrics.
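An action group such as those described above could be provisioned with Azure PowerShell along the following lines. The group and recipient names are hypothetical, and the cmdlets shown are from the AzureRM.Insights module current at the time of writing:

```powershell
# Define an email receiver (the recipient address is a placeholder)
$receiver = New-AzureRmActionGroupReceiver -Name "emailAdmins" `
    -EmailReceiver -EmailAddress "admin@adatum.com"

# Create or update the action group that alerts can then reference
Set-AzureRmActionGroup -Name "OpsNotify" -ResourceGroupName "ExampleGroup" `
    -ShortName "opsnotify" -Receiver $receiver
```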

Additional Reading: For a walkthrough of Monitor, refer to the “Walkthrough” section of
“Get started with Azure Monitor” at: https://aka.ms/Vhqo50

Implementing Azure governance with Azure Resource Manager


You can enhance the manageability of your Azure environment considerably by utilizing features of the
Azure Resource Manager deployment model introduced in Module 1, “Introduction to Azure”. These
features include Role-Based Access Control (RBAC), Azure policies and initiatives, and locks.

RBAC allows you to delegate the ability to carry out specific actions on individual Azure resources. The
actions are part of the definition of a role, which resides in the Azure AD tenant associated with the
subscription that is hosting the resources. There are a number of predefined roles, but you can also create
custom roles. To implement the delegation, you must first choose a role that best fits your delegation
model. Then you assign that role to a user, group, service principal, or managed service identity (MSI)
residing in the same Azure AD tenant that hosts the role definition. Finally, you designate the scope of
role assignment. The scope can be a single resource, a resource group, a single subscription, or a
management group, which is a collection of subscriptions sharing the same Azure AD tenant. Azure
management groups support nesting, allowing you to create a hierarchy of management groups. The
management group tree can be up to six levels deep and have up to 10,000 management groups within
the same Azure AD tenant.

Note: At the time of authoring this content, management groups are in public preview.

RBAC includes three predefined roles which are not resource-specific:

• Owner. Can perform all actions within the target scope.

• Contributor. Can perform all actions within the target scope, with the exception of delegating or
revoking access.
• Reader. Can view resource settings, except for their secrets, such as the keys of an Azure storage
account.

Also, RBAC includes many resource-specific roles that let you perform resource-specific actions. For
example, the Virtual Machine Contributor role defines actions that are necessary to manage a VM, but
does not include the ability to sign in to its operating system. Similarly, the Storage Account Contributor
role defines actions necessary to manage an Azure storage account but not to access its content.
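For example, a role assignment scoped to a resource group can be created with Azure PowerShell; the sign-in name and resource group below are placeholders:

```powershell
# Assign the Virtual Machine Contributor role to a user at resource group scope
# (the sign-in name and resource group name are hypothetical)
New-AzureRmRoleAssignment -SignInName "user@adatum.com" `
    -RoleDefinitionName "Virtual Machine Contributor" `
    -ResourceGroupName "ExampleGroup"
```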

Additional Reading: For information regarding creating custom roles, refer to: “Create
custom roles for Azure Role-Based Access Control” at: https://aka.ms/rz30ze

The Azure Policy service allows you to enforce standards within your Azure subscription by using policies.
Their implementation involves two steps. The first step is creating a policy definition, which describes
conditions that determine policy compliance and resulting actions that depend on the outcome of
evaluation of these conditions. The second step is assigning the policy definition to a target scope. This
scope can be a resource group, a subscription, or a management group.

You can combine multiple policies into initiatives. This allows you to reuse existing policies to address
more complex governance requirements. If you decide to take this approach, you must assign an initiative
to the target scope, instead of performing a policy assignment.

Once the policy is in place, provisioning of a new resource or changes to an existing resource within the
scope of the assignment are subject to its rules. In addition, you can view the compliance status of your
initiatives, policies, and resources.

Note: You can create policy exclusions, which exempt a child scope from the effects of the
policy you assigned to the parent scope.

Azure Policy includes several built-in policies, which represent common use cases:

• Allowed Locations. Restricts Azure regions that can host resources within the scope of the policy.
• Allowed Resource Type. Restricts types of resources that are available for deployment within the
scope of the policy.
• Allowed Storage Account SKUs. Restricts Azure Storage account stockkeeping units (SKUs) that are
available for deployment or conversion within the scope of the policy.

• Allowed Virtual Machine SKUs. Restricts virtual machine SKUs that are available for
deployment or resizing within the scope of the policy.

• Apply tag and its default value. Identifies whether a user who provisioned or modified resources
within the scope of the policy assigned them a specific tag and, if not, automatically assigns that
tag with its default value.

• Enforce tag and its value. Identifies whether resources within the scope of the policy contain a specific
tag and, if not, automatically assigns it.

• Not allowed resource types. Prohibits deployment of specific resource types within the scope of the
policy.

• Require SQL Server 12.0. Enforces the use of a specific version of Azure SQL Database.

To implement these policies, you simply need to assign values to their parameters, such as specific Virtual
Machine SKUs that you want to allow, and then assign the policy to the scope where it should take effect.
You can perform these actions directly from the Azure portal, or use Azure PowerShell or Azure CLI.

Policies take the form of JavaScript Object Notation (JSON)–formatted policy definition files with a
relatively straightforward syntax. The following shows a sample policy that ensures that provisioning of all
resources takes place in the East US Azure region:

{
    "if": {
        "not": {
            "field": "location",
            "in": ["eastus"]
        }
    },
    "then": {
        "effect": "deny"
    }
}

Once you have a JSON file containing the intended policy, you must create a policy definition object. To
accomplish this, you can use the Azure PowerShell New-AzureRmPolicyDefinition cmdlet or Azure CLI
az policy definition create command. After you create a policy definition, you must assign it to a scope
where the policy will apply, such as a resource, resource group, or an entire subscription. You can use the
Azure PowerShell New-AzureRmPolicyAssignment cmdlet or Azure CLI az policy assignment create
command for this purpose.
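These two steps can be sketched with Azure PowerShell, using the sample policy shown earlier. The file path, definition name, and resource group name are placeholders:

```powershell
# Create the policy definition from a JSON file (the path is hypothetical)
$definition = New-AzureRmPolicyDefinition -Name "require-east-us" `
    -Description "Restricts deployments to the East US region" `
    -Policy "C:\Policies\allowEastUsOnly.json"

# Assign the definition at resource group scope
$rg = Get-AzureRmResourceGroup -Name "ExampleGroup"
New-AzureRmPolicyAssignment -Name "require-east-us-assignment" `
    -PolicyDefinition $definition -Scope $rg.ResourceId
```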
To increase the flexibility of the policy definition, you can replace the hard-coded name of the target
Azure region with a parameter in the form of an array of strings representing allowed Azure regions:

{
    "properties": {
        "mode": "all",
        "parameters": {
            "allowedLocations": {
                "type": "array",
                "metadata": {
                    "description": "The list of allowed Azure regions",
                    "strongType": "location",
                    "displayName": "Allowed locations"
                }
            }
        },
        "displayName": "Allowed locations",
        "description": "This policy restricts allowed Azure regions",
        "policyRule": {
            "if": {
                "not": {
                    "field": "location",
                    "in": "[parameters('allowedLocations')]"
                }
            },
            "then": {
                "effect": "deny"
            }
        }
    }
}
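Assuming you have created a policy definition object $definition from the parameterized sample (for instance, by passing the policyRule and parameters sections to the -Policy and -Parameter arguments of New-AzureRmPolicyDefinition, respectively), the assignment supplies the concrete parameter values. The subscription ID and region list below are placeholders:

```powershell
# Assign the parameterized policy at subscription scope,
# passing the allowed regions as a parameter object
New-AzureRmPolicyAssignment -Name "allowed-locations-assignment" `
    -PolicyDefinition $definition `
    -Scope "/subscriptions/<subscription-id>" `
    -PolicyParameterObject @{ "allowedLocations" = @("eastus", "westus") }
```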

The mode element designates the types of resources that the policy will evaluate. It can take one of the
following two values:

• indexed. The resulting policy will be capable of evaluating only the resource types that support tags
and location properties.

• all. The resulting policy will be capable of evaluating all resource types and resource groups. This
extends the scope of policies to such resources as subnets, VM extensions, or Azure SQL Database
audit settings.

Note: You should set the mode element to all for your policies to ensure that they apply to
all resource types.

The strongType element within the metadata property of the parameters element automatically
generates a multiple-selection list when viewing the policy within the Azure portal.

The effect element supports a number of options:

• Deny. Results in a failed request and generates the corresponding entry in the audit log.

• Audit. Implements the request but generates a warning in the audit log.

• Append. Implements the request and automatically adds settings defined in the policy as part of the
implementation.
• AuditIfNotExists. Triggers a deferred evaluation during a resource deployment on other resources,
including child resources, if they were not subject to a policy during their deployment. For example,
when deploying a VM with an antimalware VM extension, you can trigger an automatic audit of
other VMs in the same resource group for the presence of that extension.

• DeployIfNotExists. Deploys an Azure Resource Manager template–based resource if one does not
already exist within the scope of the current deployment. For example, you can force deployment of
an antimalware VM extension if it is not included in the VM deployment.

Additional Reading: For more information regarding the structure of Azure Policy
definitions, refer to: “Azure Policy definition structure” at: https://aka.ms/Oqjg5d

Azure Policy has two pricing tiers:

• Free. This allows you to apply policies to resources that you subsequently deploy or modify.

• Standard. This additionally allows you to apply policies to existing resources, which enables you to
evaluate their compliance status.

The primary purpose of locks is to prevent accidental modification or deletion of resources. There are two
types of locks:

• ReadOnly. This lock prevents modification within the scope where the lock is assigned.

• CanNotDelete. This lock prevents deletion within the scope where the lock is assigned.

You can assign a lock on a resource, a resource group, or a subscription. The most straightforward way to
create and assign a lock is directly from the Azure portal. You will find the Lock entry on the blade of each
resource, resource group, and subscription. To activate a lock, you will need to specify its name and type.
Optionally, you can add notes to document your configuration change. You can also apply locks via Azure
PowerShell, Azure CLI, an Azure Resource Manager template, or REST API.
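For instance, the Azure PowerShell equivalent of the portal workflow might look like the following; the lock and resource group names are placeholders:

```powershell
# Prevent accidental deletion of everything in the resource group
# (lock name, notes, and resource group name are hypothetical)
New-AzureRmResourceLock -LockName "NoDelete" -LockLevel CanNotDelete `
    -LockNotes "Protects production resources" `
    -ResourceGroupName "ExampleGroup"
```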

Check Your Knowledge


Question

Which of the following are the three RBAC built-in roles that apply to all resource types?

Select the correct answer.

Service Administrator

Owner

Contributor

Co-Administrator

Reader

Lesson 2
Implementing Automation
In this lesson, you will learn about the architecture, capabilities, and main components of Automation.
You will learn about the process of creating an Automation account and its assets. In addition, you will
become familiar with extending the scope of Automation to on-premises systems by leveraging Hybrid
Runbook Workers.

Lesson Objectives
After completing this lesson, you should be able to:

• Identify the role of Automation in the context of the overall Azure offering.

• Describe the architecture of Automation and list its components.

• Explain how to create an Automation account and its assets.

• Describe how to use Automation runbooks on-premises.

• Create an Automation account and its assets.

Introducing Automation
Automation has undergone significant enhancements since its introduction as a cloud-based service.
Initially, its capabilities were limited to managing Azure-resident services. It relied exclusively on Azure
PowerShell workflows, which information technology (IT) professionals would typically author via the
Azure portal–based text editor. Since then, not only has Automation become available on-premises, but it
is also possible to use it with Windows PowerShell scripts, which are considerably more familiar to a
typical IT professional. Also, you can now author workflows by using a graphical editor, directly in the
Azure portal. In addition, both VMs and on-premises systems can benefit from the support for Desired
State Configuration (DSC). DSC integrates with Automation, ensuring that the state of managed resources
does not change over time in an uncontrolled manner.

Note: At the time of authoring this content, support for Python 2–based runbooks is in
public preview.

The core component of Automation is an account. An Automation account serves as a container of
automation components, such as Azure PowerShell modules, scripts, and workflows, or credentials and
certificates used to connect to other Azure services. You can create multiple Automation accounts per
Azure subscription. This allows you to separate management of your development and production
environments, with each containing different settings. You can define these settings by creating assets,
which include PowerShell modules, credentials, certificates, connections, schedules, and variables.

When working with Automation, another term that you will encounter often is activity. You might find this
term confusing, because it appears in two distinct contexts. The first one refers to PowerShell workflow
activities. Workflow activities use the same verb-noun combination as PowerShell cmdlets, but their
internal implementation differs, since they rely on Windows Workflow Foundation. As a result, there are
some unique rules that dictate how you use PowerShell workflow activities. We will explore these rules in
the next lesson of this module. The second meaning of the term activity is generic and represents an
individual automation task that you implement, which typically refers to either a PowerShell cmdlet or a
PowerShell workflow activity.

Assets and activities become building blocks of PowerShell workflows and scripts, which result in the
creation of Automation runbooks. Runbooks deliver the core functionality of the Automation service,
executing your custom tasks either on demand or according to the schedule that you specify. Each unit of
runbook execution is referred to as a job.

You can also take advantage of Automation by using PowerShell DSC. This technology, introduced in
PowerShell 4.0, allows you to define a configuration that you want to apply to managed computers and
then deliver this configuration to them in the push or pull manner. Push indicates that you actively deploy
the configuration to target computers. With the pull approach, target computers periodically copy the
configuration from a designated location, known as a pull server. Automation allows you to create such
configurations, store them on an Azure-resident DSC pull server, and apply them to VMs.

Automation runbooks run in Azure, so, by default, they cannot directly target your on-premises resources.
However, it is possible to accomplish this by deploying intermediary systems known as Hybrid Runbook
Workers. These systems, which operate typically in groups for resiliency reasons, reside on your local
network and communicate with Automation to execute its runbooks against computers on the same
network.

Note: To implement Hybrid Runbook Workers, you must integrate Automation with Log
Analytics and its workspace.

Automation as a component of Azure


Automation is an Azure service with the primary
objective of automating a variety of repetitive and
long-running tasks, both in Azure and on-
premises. With the introduction of the Desired
State Configuration component, Automation also
allows you to maintain consistent configuration of
managed resources.

Automation relies on PowerShell scripts and workflows to provide its automation functionality.
As a result, you can implement any noninteractive
procedure, as long as it is possible to script it with
PowerShell. Your inventiveness and scripting skills
play a significant role in determining how Automation benefits you. Among the more common
Automation scenarios are scheduled provisioning and deprovisioning of VMs. Workflows provide
additional resiliency, automatically resuming any interrupted tasks. As mentioned in the previous lesson,
Automation integrates closely with Log Analytics.

Creating Automation accounts and assets


To implement Automation, you must first create
an Automation account. The Automation account
defines the scope of other Automation
components, including assets and runbooks.

You can create an Automation account by provisioning any of the following resources:
• The Automation & Control bundle of Log
Analytics management solutions from the
Marketplace. This method creates and links
together a new Automation account and a
new Operations Management Suite
workspace. Log Analytics then collects
automation-related logs and diagnostics data, simplifying its analysis.

• The Automation service from the Marketplace. This method creates a new Automation account
without a corresponding Operations Management Suite workspace. You can link the Automation
account to an Operations Management Suite workspace afterwards, if they are both part of the same
resource group.
• Some of the individual Log Analytics Management solutions. With this method, you first select a
Marketplace solution, such as Update Management, Start/Stop VMs during off hours, or Change
Tracking. Your selection will trigger a prompt, asking you to either specify an existing Automation
account and Operations Management Suite workspace or create new ones, if preferred.
After you create an Automation account, you can start populating it with Automation assets. Assets
represent configurable components that you can use to build Automation runbooks. The assets are
grouped into the following six categories:
• Modules. PowerShell modules imported into an Automation account. Modules determine the groups
of cmdlets and activities that are available when you create PowerShell scripts and workflows. By
default, any newly created account contains a number of PowerShell modules, including Azure,
Azure.Storage, AzureRM.Automation, AzureRM.Compute, AzureRM.Profile, AzureRM.Resources,
AzureRM.Sql, AzureRM.Storage, Microsoft.PowerShell.Core, Microsoft.PowerShell.Diagnostics,
Microsoft.PowerShell.Management, Microsoft.PowerShell.Security, Microsoft.PowerShell.Utility,
Microsoft.WSMan.Management, and Orchestrator.AssetManagement.Cmdlets.

Note: Both Service Management and Azure Resource Management modules are available,
which means that Automation supports both deployment models.

In the context of Automation, PowerShell modules are referred to as integration modules. This unique
naming convention indicates one important distinction between the two constructs. While both types
of modules must contain at least one .psd1, .psm1, or .dll file (which implements the actual cmdlets),
an integration module might also contain a metadata .json file. This JavaScript Object Notation
(JSON) file defines the Azure connection type that Automation should use when accessing module-
specific resources.

• Schedules. By using schedules, you can execute runbooks automatically (rather than on demand),
either once at a designated date and time, or in a recurring manner.

• Certificates. This category consists of certificates uploaded to an Automation account. One common
reason for using them is to facilitate certificate-based authentication. To retrieve the value of a
certificate asset, use the Get-AutomationCertificate activity.

• Connections. Connections contain the information required for a runbook to authenticate to an
Azure subscription or an external service or application. The type of information depends on the
authentication mechanism. You can access connection properties in the runbook with the
Get-AutomationConnection activity. Connection type definitions are included in the integration
modules that deliver related PowerShell functionality. To make a specific connection type available,
you need to import the module that contains the connection type definition.

• Variables. This category contains values that you can reference in your scripts. By using variables, you
avoid the need to modify your runbooks directly (potentially multiple times) if the referenced value
changes. Variables are also useful for sharing values between runbooks or sharing values between
multiple jobs executing the same runbook. To retrieve variables, use the Get-AutomationVariable
activity.
• Credentials. Credentials consist of a user name and password combination. To retrieve a credential
within a runbook, you can use the Get-AutomationPSCredential activity. The credential must
represent an Azure AD account, because Automation does not support Microsoft accounts.
It is possible to encrypt content related to some of the Automation assets, including credentials,
connections, and variables. Once the encryption takes place, to retrieve the protected content, you must
use runbook activities rather than the corresponding PowerShell cmdlets.
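The asset-retrieval activities listed above follow a common pattern inside a runbook. The following sketch shows how a runbook might read a variable and a credential; the asset names are hypothetical and would have to exist in the Automation account:

```powershell
# Sketch only; 'StorageAccountName' and 'AdminAccount' are hypothetical
# asset names defined in the Automation account.
$storageAccount = Get-AutomationVariable -Name 'StorageAccountName'
$credential = Get-AutomationPSCredential -Name 'AdminAccount'

# The retrieved values behave like any other PowerShell objects; here, the
# credential (an Azure AD account) is used to sign in to Azure.
Add-AzureRmAccount -Credential $credential
```

Because these are activities rather than plain cmdlets, they also work against encrypted variable and credential assets, as noted above.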

Using Automation runbooks on-premises


Automation has direct access to Azure-hosted
services. For Automation to manage on-premises
resources in the same way, you would have to open
on-premises networks to inbound traffic originating
from Azure. Because this approach is rarely feasible, Automation offers a solution that avoids
these security implications.
To implement it, you must deploy on-premises
agents referred to as Hybrid Runbook Workers.
These agents establish outbound, persistent
connections to Azure.

Hybrid Runbook Workers require servers running
Windows Server 2012 or newer and use Microsoft Management Agent to communicate with both
Automation and Log Analytics. The former delivers core automation components, which include runbooks
and the execution parameters and instructions associated with them. The latter handles monitoring and
agent maintenance.

To ensure resiliency and scalability, you typically deploy Hybrid Runbook Workers in groups, though it is
possible to have a single worker in a group. You reference the group name when you start a runbook.
Automation automatically designates one of the group members to execute the corresponding job.

The process of deploying a Hybrid Runbook Worker consists of the following tasks:

1. Create an Operations Management Suite workspace or identify an existing one.

2. Add the Automation solution to the Operations Management Suite workspace.



3. Install the Microsoft Management Agent on the on-premises computer running Windows Server 2012
or newer, which will be serving the Hybrid Runbook Worker role.

4. Run the Add-HybridRunbookWorker PowerShell cmdlet on the Hybrid Runbook Worker computer
to establish its communication with the workspace. The cmdlet is part of the HybridRegistration
PowerShell module, which Hybrid Runbook Worker downloads automatically once you add the
Automation solution to the Operations Management Suite workspace. The cmdlet includes, as one of
its parameters, the name of the group of which the Hybrid Runbook Worker will become a member.
If the group does not exist, it is created at this point. The remaining parameters are the Automation
account URL and its access key, which you can retrieve from the Automation account blade in the
Azure portal.
To run an Automation runbook on-premises, you must specify the Run on option (either via the
Azure portal interface or by including the –RunOn parameter when invoking the
Start-AzureAutomationRunbook cmdlet) and specify the name of the target Hybrid Runbook Worker
Group as its value. In addition, you will likely need to install PowerShell modules that the runbook relies
on during its execution, because these are not automatically deployed to the worker computer.
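As a sketch, starting a runbook on a Hybrid Runbook Worker group might look as follows; the account, runbook, and group names are hypothetical:

```powershell
# Run the published runbook on an on-premises Hybrid Runbook Worker group
# rather than in Azure (all names below are hypothetical).
Start-AzureAutomationRunbook -AutomationAccountName 'MyAutomationAccount' `
    -Name 'Restart-LocalService' `
    -RunOn 'OnPremWorkerGroup'
```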

Additional Reading: To manage Hybrid Runbook Worker Group systems by using
Automation DSC, you need to configure them as DSC nodes. For details regarding this
configuration, refer to: “Onboarding machines for management by Automation DSC” at:
https://aka.ms/qtdos6

Demonstration: Creating an Automation account and assets


In this demonstration, you will see how to:

• Create an Automation account.


• Create an Automation Variable asset.

• Create an Automation Schedule asset.

Check Your Knowledge


Question

You need to be able to execute Automation runbooks on your on-premises computers. What
additional Azure service do you need to configure?

Select the correct answer.

ExpressRoute

Log Analytics

Service Bus

Cloud service

App Service

Lesson 3
Implementing Automation runbooks
In this lesson, you will learn about implementing Automation runbooks. In particular, you will learn about
the types of Automation runbooks, and the process of authoring each of them. In addition, you will
become familiar with the implementation of DSC, which relies on Automation.

Lesson Objectives
After completing this lesson, you should be able to:

• Describe the different types of Automation runbooks.

• Explain how to create graphical Automation runbooks.


• Explain how to create basic PowerShell workflows by using sequences, checkpoints, and parallel
processing.

• Explain how to author PowerShell workflow runbooks.


• Explain how to author PowerShell runbooks.

• Describe how to implement DSC that leverages Automation.

• Author Automation runbooks by using the graphical interface.

Introduction to Automation runbooks


Runbooks deliver the core functionality of
Automation by serving as containers for your
custom scripts and workflows. Runbooks typically
reference Automation assets, such as credentials,
variables, connections, and certificates. They also
can contain other runbooks, allowing you to build
more complex runbooks in a modular manner.
You can invoke and run runbooks either on
demand or according to a custom schedule.

In general, there are two types of Automation
runbooks, based on how you create and edit their
content:

• Graphical. You can create and edit graphical runbooks only by using the graphical editor interface
available in the Azure portal.

• Textual. You can create and edit textual runbooks either by using the textual editor available in the
Azure portal, or by using any PowerShell or text editor and importing the runbooks into Azure
afterwards.

You can also categorize Automation runbooks by whether they contain PowerShell scripts or workflows.
You can implement both by using either textual or graphical runbooks.

Your choice of a runbook type is important, because it is not possible to perform conversion between the
graphical and textual types. You can, however, convert between graphical PowerShell runbooks and
graphical PowerShell workflows when importing them into an Automation account. Other considerations
include:
• Graphical runbooks simplify implementing PowerShell runbooks and workflows. For workflows, they
offer built-in visual elements representing checkpoints and parallel processing.

• PowerShell workflow–based runbooks take longer to start because they must be compiled first.

In addition to authoring, you can also export and import runbooks, which provides a convenient method
of copying them across Automation accounts. This approach is available for both graphical and textual
runbooks.

Graphical authoring of Automation runbooks


Graphical authoring of Automation runbooks
simplifies the creation of PowerShell-based scripts
and workflows. Authoring relies on the graphical
editor available in the Azure portal. It involves
selecting visual elements representing different
graphical library items and arranging them on the
canvas within the editor window.
The Library control displays all available library
items, which are grouped into four sections:

• Cmdlets. Lists all the available PowerShell
cmdlets organized according to the PowerShell
module to which they belong. In this section,
you will find all PowerShell modules that will be available to you during runbook creation, including
custom ones that you imported into the Automation account.

• Runbooks. Includes all runbooks within the current Automation account. You have the option of
adding these runbooks to the canvas as child runbooks.

• Assets. Provides easy access to all assets in the current Automation account.

• Runbook Control. Allows you to incorporate custom code within the current runbook and add
junctions that dictate the flow of execution, combining multiple, parallel execution paths into one.
Custom code gives you the ability to implement functionality that built-in library items do not offer.

Once you drop a library item onto the canvas, you can use the Configuration pane to configure its
individual settings, such as its label, retry behavior, or, in case of custom code, the actual PowerShell script
or workflow. The editor interface also includes the Test control, which gives you the ability to test the
execution of the runbook that you are currently editing.

Overview of PowerShell workflows


A workflow is a sequence of steps optimized for
long-running tasks. A workflow can also refer to
multiple steps across multiple managed nodes,
such as VMs. PowerShell workflows largely
resemble traditional PowerShell scripts, because
they use the same verb-noun syntax for their
activities. There is also an identically named
PowerShell cmdlet for most activities. However,
PowerShell workflows and scripts function
differently in several important ways.

In particular, one of the unique characteristics of
workflows is the ability to recover automatically
from failures that could be the result of, for example, reboots of managed nodes. Checkpoints make this
automatic recovery possible. Checkpoints designate points in the workflow where the workflow engine
should save the current status of the execution. In addition, workflows can execute groups of commands
in parallel, instead of sequentially, as in typical PowerShell scripts. This is useful for runbooks that perform
several independent tasks that take a significant time to complete, such as provisioning a collection
of VMs.

Checkpoints also address the side effects of the Automation throttling mechanism known as Fair Share.
This mechanism temporarily unloads any executing runbook, interrupting its execution after it has been
running for three hours. When Fair Share restarts the runbook afterwards, it resumes its execution from its
most recent checkpoint or, if one does not exist, from the beginning. The latter would likely result in the
runbook execution being interrupted again after three hours. If a runbook restarts from the same
checkpoint or from the beginning three consecutive times, Fair Share terminates it permanently with the
failed status. You should consider this behavior when authoring your automation runbooks.

Additional Reading: For more information, refer to: “PowerShell Workflows: The Basics” at:
http://aka.ms/Wlt7zp

Textual authoring of PowerShell workflow runbooks


PowerShell workflows start with the keyword
Workflow, followed by the script body enclosed
in braces, as follows:

Workflow Test-Workflow1
{
    <Activities>
}

The keyword Parallel initiates a script block
containing multiple activities that will run
concurrently (enclosed in braces). The keyword
ForEach –Parallel designates concurrent
processing of items in a collection. This allows you
to ensure that a sequence of activities in a script block that follows ForEach –Parallel will run in parallel
for each item in the collection. The keyword Sequence enforces sequential processing of activities that
reside within a parallel script block.

In the following example, activities A and B (and the sequence C-D) will execute in parallel, and there is no
way to know in advance which of these activities will complete first. Activities C and D will always execute
in sequence (first C, then D), but might execute before activity A or activity B:

Workflow Test-Workflow2 {
    Parallel {
        Activity A
        Activity B
        Sequence {
            Activity C
            Activity D
        }
    }
}
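The ForEach –Parallel keyword described earlier can be sketched as follows; the VM and resource group names are hypothetical:

```powershell
Workflow Start-VMGroup {
    $vmNames = 'VM1', 'VM2', 'VM3'

    # Each iteration of the loop runs concurrently; activities within
    # a single iteration still run in order.
    ForEach -Parallel ($vmName in $vmNames) {
        Start-AzureRmVM -Name $vmName -ResourceGroupName 'MyResourceGroup'
    }
}
```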

In general, it is likely that you will not be able to copy an existing PowerShell script and implement it
directly as a PowerShell workflow without making any modifications. It might be necessary to perform
some level of conversion, by translating PowerShell cmdlets into their corresponding PowerShell workflow
activities and accounting for differences between the two technologies. For PowerShell cmdlets that you
cannot easily map to workflow activities, you can use the InlineScript construct, which is effectively a
PowerShell script block inside your workflow. The keyword InlineScript designates a block of PowerShell
cmdlets that run in a separate, non-workflow session, returning the final result to the workflow. The
PowerShell engine, not Windows Workflow Foundation, processes the content of an InlineScript block:

InlineScript {
    Non-mapped cmdlets
}

Checkpoints are snapshots of the current state of the workflow, including the current values for runbook
variable assets. Checkpoints are saved to the Automation database, so that workflows can resume after an
interruption or outage. You set checkpoints with the Checkpoint-Workflow activity. You can use the
Suspend-Workflow activity to force a runbook to suspend, and set a checkpoint. This is useful for
runbooks that need some intermediate manual steps.
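A minimal sketch of checkpoint placement follows; the activity names are hypothetical placeholders for long-running tasks:

```powershell
Workflow Invoke-LongRunningTask {
    Invoke-FirstStage      # hypothetical long-running activity

    # Persist the workflow state; after an interruption (or a Fair Share
    # unload), execution resumes from here rather than from the beginning.
    Checkpoint-Workflow

    Invoke-SecondStage     # hypothetical long-running activity
}
```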

To create a new PowerShell workflow–based runbook from the Azure portal, navigate to the Add
Runbook blade within your Automation account, click the Create a new runbook option, and then
specify the runbook name (which must start with a letter, but might include numbers, underscores, and
dashes). Depending on whether you want to create a textual or graphical runbook, select PowerShell
Workflow or Graphical PowerShell Workflow as the runbook type.

Authoring PowerShell workflow–based textual runbooks typically involves a combination of the following
steps:

• Write code directly in the textual editor window within the Azure portal.

• Add PowerShell cmdlets contained in the PowerShell modules imported into your Automation
account.

• Reference Automation assets (including variables, connections, credentials, or certificates) by using
either Get or Set activities. For example, to reference a value of an Automation variable asset, you
would right-click it, and then click either Add “Get Variable” to canvas or Add “Set Variable” to
canvas. This would automatically add the Get-AutomationVariable activity to the canvas, with the
–Name parameter set to the name of this variable.

• Add runbooks of the same type (meaning either graphical or PowerShell workflow textual) to the
canvas. This adds the reference to this runbook within the editor window, which results in invoking
the imported runbook during execution of the currently edited one. For example, if you add a
PowerShell workflow runbook named Runbook1 to the canvas, it would appear in the editor window
as a separate line in the format Runbook1.ps1.

Textual authoring of PowerShell runbooks


Authoring PowerShell runbooks typically involves
a combination of the following steps:

• Write code directly in the textual editor window within the Azure portal.

• Add PowerShell cmdlets contained in the integration modules imported into your Automation
account.

• Reference Automation assets, including variables, connections, credentials, or certificates, by using
either Get or Set activities. For example, to reference a value of an Automation variable asset, you
would right-click it, and then click either Add “Get Variable” to canvas or Add “Set Variable” to
canvas. This would automatically add the Get-AutomationVariable activity to the canvas, with the
–Name parameter set to the name of this variable.
• Add runbooks of the same type to the canvas. This adds the reference to the runbook within the
editor window, which results in invoking the imported runbook during execution of the currently
edited one. For example, if you add a PowerShell runbook named Runbook1 to the canvas, it would
appear in the editor window as a separate line in the format .\Runbook1.ps1.

To create a new textual PowerShell runbook from the Azure portal, navigate to the Add Runbook blade
within your Automation account. Click the Create a new runbook option, specify the runbook name
(which must start with a letter, but might include numbers, underscores, and dashes), and ensure that you
select PowerShell as the runbook type.

Implementing Automation DSC


DSC allows you to define the desired state of an
operating system or application. To enforce the
desired state, you must apply this definition to one
or more managed computers via either the push
or the pull method. You use the push method by
deploying a compiled definition from a central
management point. With the pull method, you
copy the compiled definition to a designated pull
server that managed systems point to as their
configuration source.

Automation supports PowerShell DSC in the pull
mode, implementing all of its components in the
cloud. It is capable of managing VMs running Windows and Linux, on-premises computers, and VMs
hosted by other cloud providers.

The Azure DSC implementation process starts with creating a configuration script (a .ps1 file) that
represents the desired state of managed computers. The configuration contains one or more nodes,
which represent individual roles that you want to manage. You must add the configuration to the
Automation account by using either the Azure portal or Azure PowerShell. Just like PowerShell scripts
and workflows, the configuration script can reference Automation assets.
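A minimal configuration script might look as follows. This sketch uses the built-in WindowsFeature DSC resource; the configuration and node names are arbitrary:

```powershell
Configuration WebServerConfig {
    # The node name becomes part of the node configuration name
    # (for example, WebServerConfig.webserver) after compilation.
    Node 'webserver' {
        WindowsFeature IIS {
            Ensure = 'Present'
            Name   = 'Web-Server'
        }
    }
}
```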

The scope of functionality that you can manage with Automation DSC depends on DSC resources that the
Automation account includes. While there is a set of built-in resources that match those in standard
PowerShell DSC, it is possible to import additional resources if needed, by uploading PowerShell
integration modules containing their definitions. The upload functionality is available from the Azure
portal and by using Azure PowerShell.
Next, you need to compile the DSC configuration by clicking the Compile link in the configuration blade in
the Azure portal, or by invoking the Start-AzureRmAutomationDscCompilationJob cmdlet. When
using PowerShell, you have the option to specify configuration data during compilation. This allows you
to assign different configurations, depending on the computers that you intend to target. For example,
you can enforce one set of settings on the production system and another in a test environment.
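As a sketch, compiling such a configuration with optional configuration data might look like this; the resource group, account, and configuration names are hypothetical:

```powershell
# Hypothetical configuration data assigning settings per node
$configData = @{
    AllNodes = @(
        @{ NodeName = 'webserver'; Environment = 'Production' }
    )
}

# Compile the previously uploaded DSC configuration in the Automation account
Start-AzureRmAutomationDscCompilationJob -ResourceGroupName 'MyResourceGroup' `
    -AutomationAccountName 'MyAutomationAccount' `
    -ConfigurationName 'WebServerConfig' `
    -ConfigurationData $configData
```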
Compilation generates one or more Managed Object Format (MOF) files containing node configurations,
which are automatically uploaded to a DSC pull server residing in Azure (along with non-default DSC
resources). For these configurations to take effect, you need to add (or onboard, in DSC nomenclature)
target computers as DSC-managed nodes into your Automation account. You can perform the
onboarding process from the Azure portal or by using Azure PowerShell.

As part of onboarding, you will need to specify registration settings, including:

• Node configuration name. This setting specifies the name of the configuration node.
• Refresh frequency. Its value determines how often the nodes communicate with their DSC pull server.

• Configuration mode frequency. Its value determines how often nodes apply the configuration to
their local resources.
• Configuration mode. It can take one of the following values:

o ApplyAndMonitor. Applies the configuration and monitors any subsequent deviations from the
desired state, recording them in logs.
o ApplyOnly. Applies the configuration once.

o ApplyAndAutoCorrect. Applies the configuration and fixes any subsequent deviations from the
desired state, recording them in logs.
When using Azure PowerShell for node onboarding, you must also provide:

• Registration URL. This setting is available from the Manage Keys blade in the Automation account in
the Azure portal.
• Automation account registration primary or secondary key. This setting is available from the Manage
Keys blade in the Automation account in the Azure portal.

Additional Reading: For more information regarding syntax of DSC configurations, refer
to: “DSC Configurations” at: http://aka.ms/Vy5n2q

Additional Reading: For more information regarding onboarding Automation DSC nodes,
refer to: “Onboarding machines for management by Azure Automation DSC” at:
http://aka.ms/Lmsccl

Demonstration: Graphical authoring of Automation runbooks


In this demonstration, you will see how to:

• Create a graphical Automation runbook.

• Configure authentication in a graphical Automation runbook.


• Add an activity to start a VM.

Check Your Knowledge


Question

You plan to author an Automation runbook that, according to your estimates, will take seven hours to
complete. What should you do to ensure that the runbook successfully executes?

Select the correct answer.

Create a PowerShell script–based runbook.

Create a PowerShell workflow–based runbook with a single checkpoint.

Create a PowerShell workflow–based runbook with two checkpoints.

Create a PowerShell workflow–based runbook with a single Inlinescript element.

Create a PowerShell workflow–based runbook with two Inlinescript elements.



Lesson 4
Implementing Automation–based management
In this lesson, you will learn about the most common Automation management tasks, focusing on
runbook lifecycle management. The lifecycle includes testing, publishing, and scheduling automation
runbooks, in addition to monitoring and troubleshooting Automation jobs. You will also explore common
troubleshooting methods, which you can use to enhance resiliency of your Automation environment.

Lesson Objectives
After completing this lesson, you will be able to:

• Describe the lifecycle of Automation runbooks.

• Describe the process of testing, publishing, and executing Automation runbooks.

• Explain how to monitor and troubleshoot runbook execution.

• Explain how to protect the Automation environment.


• Test, publish, execute, and monitor an Automation runbook.

Automation runbook lifecycle


Any runbook residing in an Automation account
has a specific authoring status, depending on its
stage of development:
• A newly created runbook that you have not
yet published is assigned the New authoring
status automatically. In this stage, you can
modify and test it, but you cannot schedule its
execution. You also do not have the option to
revert any changes that you save.
• Once you successfully complete testing on a
runbook, you can publish it, which
automatically assigns the Published
authoring status. This is the typical status of a production-ready runbook. At this point, you can
schedule its execution. In addition, it is possible to start a published runbook by submitting an HTTP
POST request to a URL referred to as webhook. You can create a webhook via the Azure portal or
Azure PowerShell.

• If you decide to make changes to an existing, published runbook and open it in the textual or
graphical editor, it will be assigned the In edit status. This allows you to modify and test it. Any
changes that you save do not affect the published version. In addition, you have the option to revert
the edited version back to the published one.

You can easily identify the current status of any runbook from the Runbooks blade in the Azure portal, by
reviewing the AUTHORING STATUS column.
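Starting a published runbook through its webhook, as mentioned above, amounts to a single HTTP POST request. A sketch follows; the webhook URL is a placeholder, because the full URL (including its token) is displayed only once, when the webhook is created:

```powershell
# Placeholder webhook URL; the real value is shown only at creation time.
$webhookUri = 'https://s1events.azure-automation.net/webhooks?token=<token>'

# Optional input that the runbook can read from its WebhookData parameter
$body = ConvertTo-Json -InputObject @{ VMName = 'VM1' }

Invoke-RestMethod -Method Post -Uri $webhookUri -Body $body
```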

Testing, publishing, and executing Automation runbooks


To test a runbook, in the Azure portal, click Test
pane in the toolbar of the graphical or textual
editor blade. Testing allows you to validate
runbook operation before making the runbook
available for production use. This is possible
without overwriting an existing, published version.
Depending on the runbook type, you can initiate
testing from the graphical or textual editor, and
monitor results of its execution in the Output
blade. It is important to note that, during a test,
the edited runbook will actually run its activities. In
other words, testing is not functionally equivalent
to the –WhatIf PowerShell switch.

Note: Because, by default, runbook tests run against a live environment, you might want to
consider creating a dedicated test subscription or an on-premises Hybrid Runbook Workers
group. When you have the final version of a runbook, you can then export it, and import it into
your production subscription.

To publish a runbook that you validated through testing, in the Azure portal, click Publish in the toolbar
of the graphical or textual editor blade. Once you publish a runbook, you can link it to one or more
schedules, with different recurrence settings (one time, hourly, and daily) and an expiration date. You have
the option of enabling or disabling individual schedules without affecting others linked to the same
runbook. You can also modify any of the runbook input parameters and run settings. By default, runbooks
run in Azure, but if you deploy a Hybrid Runbook Worker group, you can run them on-premises. You also
have the option to execute a published runbook on demand. You can do this by clicking Start in the
toolbar of the runbook blade in the Azure portal.
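The same publish, schedule, and start operations are available from Azure PowerShell. The following is a minimal sketch using the AzureRM.Automation module; the resource group, account, runbook, and schedule names are hypothetical:

```powershell
# Hypothetical names throughout; assumes the AzureRM.Automation module.
$rg      = 'MyResourceGroup'
$account = 'MyAutomation'

# Publish the edited draft, replacing the currently published version.
Publish-AzureRmAutomationRunbook -ResourceGroupName $rg `
    -AutomationAccountName $account -Name 'Stop-IdleVms'

# Create a daily schedule starting tomorrow at 22:00 and link it to the runbook.
New-AzureRmAutomationSchedule -ResourceGroupName $rg `
    -AutomationAccountName $account -Name 'NightlySchedule' `
    -StartTime (Get-Date).AddDays(1).Date.AddHours(22) -DayInterval 1

Register-AzureRmAutomationScheduledRunbook -ResourceGroupName $rg `
    -AutomationAccountName $account -RunbookName 'Stop-IdleVms' `
    -ScheduleName 'NightlySchedule'

# Alternatively, start the runbook on demand; the returned object represents the job.
$job = Start-AzureRmAutomationRunbook -ResourceGroupName $rg `
    -AutomationAccountName $account -Name 'Stop-IdleVms'
```

To run the runbook on-premises instead of in Azure, Start-AzureRmAutomationRunbook accepts a -RunOn parameter that targets a Hybrid Runbook Worker group.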

Regardless of the method, invoking execution of a runbook creates an automation job. A runbook job
represents a single execution of a runbook. You can run multiple instances of the same runbook
simultaneously or according to overlapping schedules.

Monitoring and troubleshooting Automation jobs


You can control and monitor each automation job
by using its blade in the Azure portal. The
interface provides you with the ability to stop,
suspend, and resume the job’s execution,
depending on its current status. From here, you
also have access to job summary information, the
job’s output, and errors, warnings, exceptions, and
logs that it generates. Alternatively, you can
retrieve job status information by using the
Get-AzureAutomationJob PowerShell cmdlet.
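In the AzureRM module, the equivalent cmdlets are Get-AzureRmAutomationJob and Get-AzureRmAutomationJobOutput. The following sketch lists recent jobs for a runbook and retrieves the output of the most recent one; all names are hypothetical placeholders:

```powershell
# Hypothetical names; assumes the AzureRM.Automation module.
$jobs = Get-AzureRmAutomationJob -ResourceGroupName 'MyResourceGroup' `
    -AutomationAccountName 'MyAutomation' -RunbookName 'Stop-IdleVms'

# Identify the most recent job and check its status.
$latest = $jobs | Sort-Object StartTime -Descending | Select-Object -First 1
$latest.Status   # for example: Completed, Failed, Running, or Suspended

# Retrieve the job's output stream; use -Stream Error or -Stream Warning
# when troubleshooting failures.
Get-AzureRmAutomationJobOutput -ResourceGroupName 'MyResourceGroup' `
    -AutomationAccountName 'MyAutomation' -Id $latest.JobId -Stream Output
```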

When monitoring and troubleshooting jobs, you should be familiar with their possible states, which
include:

• Completed. Designates successful completion of the job.

• Failed. For PowerShell workflow–based runbooks, which include all graphical runbooks, this indicates
a compilation failure. For PowerShell script–based runbooks, this typically is a result of an exception in
the script execution.

• Failed, waiting for resources. Implies that the job has failed because it has reached the limit of three
consecutive restarts following the Fair Share–based unload.

Note: The previous lesson in this module described the Fair Share mechanism.

• Queued. Designates the state of waiting for resources necessary to initiate job execution.

• Starting. Follows the Queued state, once the platform has assigned necessary resources to the job.

• Running. Designates the job actively performing activities included in the runbook.

• Running, waiting for resources. Indicates that the job has been unloaded because it reached the Fair
Share limit by running for three hours. The job will resume from the most recent checkpoint.
• Stopped. Indicates that the job was stopped prior to its completion by a stop request from the owner
of the Automation account.
• Stopping. Describes a job in the process of stopping prior to its completion, following the stop
request by an administrative user with sufficient permissions to the Automation account.

• Suspended. Results from the request to suspend the job. Such a request can be initiated by an
administrative user with sufficient permissions to the Automation account, by the Azure platform (in
case of an exception), or by a command in the runbook.

• Suspending. Indicates that the platform is attempting to suspend the job following a request from an
administrative user with sufficient permissions to the Automation account. Note that the job will have
to reach its next checkpoint, or complete if a checkpoint does not exist, before it changes its status to
Suspended.

• Resuming. Follows the Suspended state and is typically a result of an administrative action.

Protecting the Automation environment


It is important to consider protecting your
Automation configuration beyond the built-in
resiliency mechanisms of the Azure platform. By
default, Azure geo-replicates the content of each
Automation account to a secondary region,
automatically paired up with the primary region of
your choice. Both regions reside in the same
geopolitical area. The secondary replica becomes
available in case of a disaster affecting the region
that hosts the primary.

Azure also offers a 90-day default data retention period, which designates the length of time during
which you can view and audit past jobs. This period also determines the time after which the platform
permanently removes administratively deleted automation objects, such as accounts, assets, modules,
runbooks, or DSC components.
If these built-in provisions do not satisfy your requirements, you have the option of backing up your
Automation environment by using the following methods:

• Export runbooks from the Azure portal or by running the Get-AzureAutomationRunbookDefinition
PowerShell cmdlet.

• Maintain integration modules outside of an Automation account, because it is not possible to export
them.

• Extract and store definitions of unencrypted assets by running Azure PowerShell cmdlets,
because assets also are not exportable. To retrieve encrypted values of Automation variable and
credential assets, use the equivalent Automation activities (Get-AutomationVariable and
Get-AutomationPSCredential).

• Export DSC configurations by using the Azure portal or the Export-AzureRmAutomationDscConfiguration
PowerShell cmdlet.
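The runbook and DSC exports can be scripted so that the entire account is backed up in one pass. The following is a sketch using the AzureRM.Automation module; the resource group, account, configuration name, and output folder are hypothetical:

```powershell
# Hypothetical names; assumes the AzureRM.Automation module.
# Export the published version of every runbook in the account to local files.
Get-AzureRmAutomationRunbook -ResourceGroupName 'MyResourceGroup' `
    -AutomationAccountName 'MyAutomation' |
    ForEach-Object {
        Export-AzureRmAutomationRunbook -ResourceGroupName $_.ResourceGroupName `
            -AutomationAccountName $_.AutomationAccountName -Name $_.Name `
            -Slot Published -OutputFolder 'C:\AutomationBackup' -Force
    }

# Export a DSC configuration the same way.
Export-AzureRmAutomationDscConfiguration -ResourceGroupName 'MyResourceGroup' `
    -AutomationAccountName 'MyAutomation' -Name 'MyDscConfig' `
    -Slot Published -OutputFolder 'C:\AutomationBackup' -Force
```

Scheduling a script such as this one, for example as a runbook itself, provides a recurring backup beyond the platform's built-in retention.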

Demonstration: Testing, publishing, executing, and monitoring execution of an Automation runbook
In this demonstration, you will see how to:

• Test a runbook.

• Publish a runbook.

• Execute a runbook and monitor the corresponding job.

Check Your Knowledge


Question

What actions are available for a runbook in the New authoring status?

Select the correct answer.

Testing

Scheduling

Creating a webhook

Reverting to the published version

Editing

Lab: Implementing Automation


Scenario
Adatum Corporation wishes to minimize administrative overhead as much as possible, especially for tasks
that involve management of VMs. For this reason, as part of Adatum’s evaluation of Microsoft Azure, you
have been asked to configure an Automation account and use its features to automate the most common
VM management tasks.

Objectives
After completing this lab, you will be able to:

• Configure Automation accounts.

• Create runbooks.

Note: The lab steps for this course change frequently due to updates to Microsoft Azure.
Microsoft Learning updates the lab steps frequently, so they are not available in this manual. Your
instructor will provide you with the lab documentation.

Lab Setup
Estimated Time: 40 minutes

Virtual machine: 20533E-MIA-CL1

User name: Student

Password: Pa55w.rd

Before starting this lab, ensure that you have performed the “Preparing the lab environment”
demonstration tasks at the beginning of the first lesson in this module, and that the setup script has
completed.

Question: What should you consider when testing the execution of an Automation
runbook?
Question: Why did you have to create an Automation Run As account in the lab?

Module Review and Takeaways


Review Question

Question: What are the potential benefits and challenges of running PowerShell workflows
from an on-premises computer as compared to running them as Automation runbooks?

Course Evaluation
Your evaluation of this course will help Microsoft
understand the quality of your learning experience.

Please work with your training provider to access the course evaluation form.

Microsoft will keep your answers to this survey private and confidential and will use your
responses to improve your future learning experience. Your open and honest feedback is
valuable and appreciated.