
Acropolis Hypervisor Administration Guide

Acropolis 4.5
06-Apr-2016

Notice
Copyright
Copyright 2016 Nutanix, Inc.
Nutanix, Inc.
1740 Technology Drive, Suite 150
San Jose, CA 95110
All rights reserved. This product is protected by U.S. and international copyright and intellectual property
laws. Nutanix is a trademark of Nutanix, Inc. in the United States and/or other jurisdictions. All other marks
and names mentioned herein may be trademarks of their respective companies.

License
The provision of this software to you does not grant any licenses or other rights under any Microsoft
patents with respect to anything other than the file server implementation portion of the binaries for this
software, including no licenses or any other rights in any hardware or any devices or software that are used
to communicate with or in connection with this software.

Conventions
Convention            Description

variable_value        The action depends on a value that is unique to your environment.

ncli> command         The commands are executed in the Nutanix nCLI.

user@host$ command    The commands are executed as a non-privileged user (such as nutanix) in the system shell.

root@host# command    The commands are executed as the root user in the vSphere or Acropolis host shell.

> command             The commands are executed in the Hyper-V host shell.

output                The information is displayed as output from a command or in a log file.

Default Cluster Credentials


Interface                  Target                  Username       Password

Nutanix web console        Nutanix Controller VM   admin          admin
vSphere Web Client         ESXi host               root           nutanix/4u
vSphere client             ESXi host               root           nutanix/4u
SSH client or console      ESXi host               root           nutanix/4u
SSH client or console      AHV host                root           nutanix/4u
SSH client or console      Hyper-V host            Administrator  nutanix/4u
SSH client                 Nutanix Controller VM   nutanix        nutanix/4u

Version
Last modified: April 6, 2016 (2016-04-06 15:34:50 GMT-7)


Contents
1: Node Management...................................................................................5
Controller VM Access.......................................................................................................................... 5
Shutting Down a Node in a Cluster (AHV)......................................................................................... 5
Starting a Node in a Cluster (AHV).................................................................................................... 5
Changing CVM Memory Configuration (AHV).....................................................................................6
Changing the Acropolis Host Name.................................................................................................... 7
Changing the Acropolis Host Password..............................................................................................7
Upgrading the KVM Hypervisor to Use Acropolis Features................................................................ 8

2: Host Network Management.................................. 11
Prerequisites for Configuring Networking.......................................................... 11
Best Practices for Configuring Networking in an Acropolis Cluster.................................. 11
Layer 2 Network Management with Open vSwitch........................................................................... 13
About Open vSwitch............................................................................................................... 13
Default Factory Configuration................................................................................................. 14
Viewing the Network Configuration.........................................................................................15
Creating an Open vSwitch Bridge.......................................................................................... 17
Configuring an Open vSwitch Bond with Desired Interfaces..................................................17
Virtual Network Segmentation with VLANs............................................................................ 18
Changing the IP Address of an Acropolis Host................................................................................ 20

1
Node Management
Controller VM Access
Most administrative functions of a Nutanix cluster can be performed through the web console or nCLI.
Nutanix recommends using these interfaces whenever possible and disabling Controller VM SSH access
with password or key authentication. Some functions, however, require logging on to a Controller VM
with SSH. Exercise caution whenever connecting directly to a Controller VM as the risk of causing cluster
issues is increased.
Warning: When you connect to a Controller VM with SSH, ensure that the SSH client does
not import or change any locale settings. The Nutanix software is not localized, and executing
commands with any locale other than en_US.UTF-8 can cause severe cluster issues.
To check the locale used in an SSH session, run /usr/bin/locale. If any environment variables
are set to anything other than en_US.UTF-8, reconnect with an SSH configuration that does not
import or change any locale settings.
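For example, a quick check from within an SSH session looks like the following; the output shown is the desired state, and the exact set of variables displayed varies by system:

nutanix@cvm$ /usr/bin/locale
LANG=en_US.UTF-8
LC_CTYPE="en_US.UTF-8"
LC_MESSAGES="en_US.UTF-8"
LC_ALL=

If LANG or any LC_* variable shows a different value, reconnect with an SSH client configuration that does not send your local environment (for example, by disabling SendEnv for LANG and LC_* in the client configuration).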

Shutting Down a Node in a Cluster (AHV)


Before you begin: Shut down guest VMs that are running on the node, or move them to other nodes in
the cluster.
Caution: Shut down only one node at a time in each cluster. If more than one node must be shut
down, shut down the entire cluster instead.
1. If the Controller VM is running, log on to the Controller VM with SSH and shut it down.
nutanix@cvm$ cvm_shutdown -P now

2. Log on to the Acropolis host with SSH.


3. Shut down the host.
root@ahv# shutdown -h now

Starting a Node in a Cluster (AHV)


1. Log on to the Acropolis host with SSH.
2. Find the name of the Controller VM.
root@ahv# virsh list --all | grep CVM

Make a note of the Controller VM name in the second column.

3. Determine if the Controller VM is running.

If the Controller VM is off, a line similar to the following should be returned:

-    NTNX-12AM2K470031-D-CVM    shut off

If the Controller VM is on, a line similar to the following should be returned:

-    NTNX-12AM2K470031-D-CVM    running

4. If the Controller VM is shut off, start it.


root@ahv# virsh start cvm_name

Replace cvm_name with the name of the Controller VM that you found from the preceding command.
5. Log on to another Controller VM in the cluster with SSH.
6. Verify that all services are up on all Controller VMs.
nutanix@cvm$ cluster status

If the cluster is running properly, output similar to the following is displayed for each node in the cluster:
CVM: 10.1.64.60 Up
                                Zeus   UP   [3704, 3727, 3728, 3729, 3807, 3821]
                           Scavenger   UP   [4937, 4960, 4961, 4990]
                       SSLTerminator   UP   [5034, 5056, 5057, 5139]
                            Hyperint   UP   [5059, 5082, 5083, 5086, 5099, 5108]
                              Medusa   UP   [5534, 5559, 5560, 5563, 5752]
                  DynamicRingChanger   UP   [5852, 5874, 5875, 5954]
                              Pithos   UP   [5877, 5899, 5900, 5962]
                            Stargate   UP   [5902, 5927, 5928, 6103, 6108]
                             Cerebro   UP   [5930, 5952, 5953, 6106]
                             Chronos   UP   [5960, 6004, 6006, 6075]
                             Curator   UP   [5987, 6017, 6018, 6261]
                               Prism   UP   [6020, 6042, 6043, 6111, 6818]
                                 CIM   UP   [6045, 6067, 6068, 6101]
                        AlertManager   UP   [6070, 6099, 6100, 6296]
                            Arithmos   UP   [6107, 6175, 6176, 6344]
                    SysStatCollector   UP   [6196, 6259, 6260, 6497]
                              Tunnel   UP   [6263, 6312, 6313]
                       ClusterHealth   UP   [6317, 6342, 6343, 6446, 6468, 6469, 6604, 6605, 6606, 6607]
                               Janus   UP   [6365, 6444, 6445, 6584]
                   NutanixGuestTools   UP   [6377, 6403, 6404]
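If any service is reported in a state other than UP, a quick way to isolate it is to filter the output. This filter is a convenience and is not part of the documented procedure:

nutanix@cvm$ cluster status | grep -v UP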

Changing CVM Memory Configuration (AHV)


Before you begin: Perform these steps once for each Controller VM in the cluster if you need to change
the Controller VM memory allocation.
Warning: To avoid impacting cluster availability, shut down one Controller VM at a time. Wait until
cluster services are up before proceeding to the next Controller VM.
1. Log on to the Acropolis host with SSH.


2. Find the name of the Controller VM.


root@ahv# virsh list --all | grep CVM

Make a note of the Controller VM name in the second column.


3. Stop the Controller VM.
root@ahv# virsh shutdown cvm_name

Replace cvm_name with the name of the Controller VM that you found from the preceding command.
4. Increase the memory of the Controller VM, depending on your configuration settings for deduplication
and other advanced features.
root@ahv# virsh setmaxmem cvm_name --config --size ram_gbGiB
root@ahv# virsh setmem cvm_name --config --size ram_gbGiB

Replace cvm_name with the name of the Controller VM and ram_gb with the recommended amount
from the sizing guidelines.
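For example, assuming the Controller VM name found earlier and a target allocation of 32 GiB (use the amount that the sizing guidelines recommend for your configuration):

root@ahv# virsh setmaxmem NTNX-12AM2K470031-D-CVM --config --size 32GiB
root@ahv# virsh setmem NTNX-12AM2K470031-D-CVM --config --size 32GiB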
5. Start the Controller VM.
root@ahv# virsh start cvm_name

6. Log on to the Controller VM.


Accept the host authenticity warning if prompted, and enter the Controller VM nutanix password.
7. Confirm that cluster services are running on the Controller VM.
nutanix@cvm$ cluster status | grep CVM:

Every Controller VM listed should be Up.

Changing the Acropolis Host Name


To change the name of an Acropolis host, do the following:
1. Log on to the Acropolis host with SSH.
2. Use a text editor such as vi to set the value of the HOSTNAME parameter in the /etc/sysconfig/network file.
HOSTNAME=my_hostname

Replace my_hostname with the name that you want to assign to the host.
3. Use the text editor to replace the host name in the /etc/hostname file.
4. Restart the Acropolis host.
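A minimal sketch of the same change made non-interactively, assuming a hypothetical host name of ahv-node-01:

root@ahv# sed -i 's/^HOSTNAME=.*/HOSTNAME=ahv-node-01/' /etc/sysconfig/network
root@ahv# echo ahv-node-01 > /etc/hostname
root@ahv# reboot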

Changing the Acropolis Host Password


Tip: Although it is not required for the root user to have the same password on all hosts, doing so
makes cluster management and support much easier. If you do select a different password for one
or more hosts, make sure to note the password for each host.


Perform these steps on every Acropolis host in the cluster.


1. Log on to the Acropolis host with SSH.
2. Change the root password.
root@ahv# passwd root

3. Respond to the prompts, providing the current and new root password.
Changing password for root.
Old Password:
New password:
Retype new password:
Password changed.

Upgrading the KVM Hypervisor to Use Acropolis Features


Before you begin:
Note: If you are currently deploying NOS 4.1.x/4.1.1.x and later, and previously upgraded to an
Acropolis-compatible version of the KVM hypervisor (for example, version KVM-20150120):

Do not use the script or procedure described in this topic.


Upgrade to the latest available Nutanix version of the KVM hypervisor using the Upgrade
Software feature through the Prism web console. See Software and Firmware Upgrades in the
Web Console Guide for the upgrade instructions.

Use this procedure if you are currently using a legacy, non-Acropolis version of KVM and want to use the
Acropolis distributed VM management service features. The first generally-available Nutanix KVM version
with Acropolis is KVM-20150120; the Nutanix support portal always makes the latest version available.

How to Check Your Acropolis Hypervisor Version


Use this procedure: Log in to the hypervisor host and type cat /etc/nutanix-release.
Result: For example, the following result indicates that you are running an Acropolis-compatible hypervisor: el6.nutanix.2015412. The minimum result for AHV is el6.nutanix.20150120.

Use this procedure: Log in to the hypervisor host and type cat /etc/centos-release.
Result: For example, the following result indicates that you are running an Acropolis-compatible hypervisor: CentOS release 6.6 (Final). Any result that returns CentOS 6.4 or previous is non-Acropolis (that is, KVM).

Use this procedure: Log in to the Prism web console.
Result: View the Hypervisor Summary on the home page. If it shows a version of 20150120 or later, you are running AHV.
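For example, both file checks can be run from the host shell; the version strings shown here are illustrative:

root@ahv# cat /etc/nutanix-release
el6.nutanix.20150120
root@ahv# cat /etc/centos-release
CentOS release 6.6 (Final)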


Upgrading the KVM Hypervisor to Use Acropolis


Current NOS and KVM Version                    Do This

NOS 3.5.5 and KVM CentOS 6.4                   1. Upgrade KVM using the upgrade script.
                                               2. Import existing VMs.

NOS 3.5.4.6 or earlier and KVM CentOS 6.4      1. Upgrade to NOS 3.5.5.
                                               2. Upgrade KVM using the upgrade script.
                                               3. Import existing VMs.

NOS 4.0.2/4.0.2.x and KVM CentOS 6.4           1. Upgrade KVM using the upgrade script.
                                               2. Import existing VMs.

NOS 4.1 and KVM CentOS 6.4                     1. Upgrade KVM using the upgrade script.
                                               2. Import existing VMs.

Note:

See the Nutanix Support Portal for the latest information on Acropolis Upgrade Paths.
This procedure requires that you shut down any VMs running on the host and leave them off
until the hypervisor and AOS upgrades are completed.
Do not run the upgrade script on the same Controller VM where you are upgrading the node's
hypervisor. You can run it from another Controller VM in the cluster.

1. Download the hypervisor upgrade bundle from the Nutanix support portal at the Downloads link.
You must copy this bundle to the Controller VM from which you run the upgrade script. This procedure
assumes you copy it to and extract it from the /home/nutanix directory.
2. Log on to the Controller VM of the hypervisor host to be upgraded to shut down each VM and shut
down the Controller VM.
a. Power off each VM, specified by vm_name, running on the host to be upgraded.
nutanix@cvm$ virsh shutdown vm_name

b. Shut down the Controller VM once all VMs are powered off.
nutanix@cvm$ sudo shutdown -h now

3. Log on to a different Controller VM in the cluster with SSH.


4. Copy the upgrade bundle that you downloaded to /home/nutanix on this Controller VM and extract the upgrade tar file.
nutanix@cvm$ tar -xzvf upgrade_kvm-el6.nutanix.version.tar.gz

Alternatively, download and extract the upgrade tar file directly on this Controller VM. This step assumes that you are performing the procedures from /home/nutanix.
nutanix@cvm$ curl -O http://download.nutanix.com/hypervisor/kvm/upgrade_kvm-el6.nutanix.20150120.tar.gz
nutanix@cvm$ tar -xzvf upgrade_kvm-el6.nutanix.20150120.tar.gz


Note: You can also download this package from the Nutanix support portal from the
Downloads link.
The archive extracts to the upgrade_kvm directory.
5. Change to the upgrade_kvm/bin directory and run the upgrade_kvm upgrade script, where host_ip is the
IP address of the hypervisor host to be upgraded (the host whose Controller VM you shut down in
Step 2).
nutanix@cvm$ cd upgrade_kvm/bin
nutanix@cvm$ ./upgrade_kvm --host_ip host_ip

The Controller VM of the upgraded host restarts and messages similar to the following are displayed.
This message shows the first generally-available KVM version with Acropolis (KVM-20150120).
...
2014-11-07 09:11:50 INFO host_upgrade_helper.py:1733 Found kernel
version: version_number.el6.nutanix.20150120.x86_64
2014-11-07 09:11:50 INFO host_upgrade_helper.py:1588 Current hypervisor version:
el6.nutanix.20150120
2014-11-07 09:11:50 INFO upgrade_kvm:161 Running post-upgrade
2014-11-07 09:11:51 INFO host_upgrade_helper.py:1716 Found upgrade marker:
el6.nutanix.20150120
2014-11-07 09:11:52 INFO host_upgrade_helper.py:1733 Found kernel
version: version_number.el6.nutanix.20150120
2014-11-07 09:11:52 INFO host_upgrade_helper.py:2036 Removing old kernel
2014-11-07 09:12:00 INFO host_upgrade_helper.py:2048 Updating release marker
2014-11-07 09:12:00 INFO upgrade_kvm:165 Upgrade complete

6. Log on to the upgraded Controller VM and verify that cluster services have started by noting that all
services are listed as UP.
nutanix@cvm$ cluster status

7. Repeat these steps for all hosts in the cluster.


Note: You need to upgrade the hypervisor for every host in your cluster before upgrading the
AOS/NOS on your cluster.
After the hypervisor is upgraded, you can now import any existing powered-off VMs according to
procedures described in the Acropolis App Mobility Fabric Guide.


2
Host Network Management
Network management in an Acropolis cluster consists of the following tasks:

Configuring Layer 2 switching through Open vSwitch. When configuring Open vSwitch, you configure
bridges, bonds, and VLANs.
Optionally changing the IP address, netmask, and default gateway that were specified for the hosts
during the imaging process.

Prerequisites for Configuring Networking


Change the configuration from the factory default to the recommended configuration. See Default Factory
Configuration on page 14 and Best Practices for Configuring Networking in an Acropolis Cluster on
page 11.

Best Practices for Configuring Networking in an Acropolis Cluster


Nutanix recommends that you perform the following OVS configuration tasks from the Controller VM, as
described in this documentation:

Viewing the network configuration


Configuring an Open vSwitch bond with desired interfaces
Assigning the Controller VM to a VLAN

For performing other OVS configuration tasks, such as adding an interface to a bridge and configuring
LACP for the interfaces in an OVS bond, log on to the Acropolis hypervisor host, and then follow the
procedures described in the OVS documentation at http://openvswitch.org/.
Nutanix recommends that you configure the network as follows:

Recommended Network Configuration


Open vSwitch
Do not modify the OpenFlow tables that are associated with the default OVS bridge br0.

VLAN
Add the Controller VM and the Acropolis hypervisor to the same VLAN. By default, the Controller VM and the hypervisor are assigned to VLAN 0, which effectively places them on the native VLAN configured on the upstream physical switch.
Do not add any other device, including guest VMs, to the VLAN to which the Controller VM and hypervisor host are assigned. Isolate guest VMs on one or more separate VLANs.

Virtual bridges
Do not delete or rename OVS bridge br0.
Do not modify the native Linux bridge virbr0.

OVS bonded port (bond0)
Aggregate the 10 GbE interfaces on the physical host to an OVS bond on the default OVS bridge br0 and trunk these interfaces on the physical switch.
By default, the 10 GbE interfaces in the OVS bond operate in the recommended active-backup mode. LACP configurations are known to work, but support might be limited.
Do not include the 1 GbE interfaces in the same bond as the 10 GbE interfaces. Use the 1 GbE interfaces on a different OVS bridge.

1 GbE interfaces (physical host)
Use the 1 GbE interfaces for guest VM traffic. If the 1 GbE ports are used for guest VM connectivity, follow the hypervisor manufacturer's switch port and networking configuration guidelines.
To avoid loops, do not add the 1 GbE interfaces to bridge br0, either individually or in a second bond. Use them on other bridges.

IPMI port on the hypervisor host
Do not trunk switch ports that connect to the IPMI interface. Configure the switch ports as access ports for management simplicity.

Upstream physical switch
Nutanix does not recommend the use of Fabric Extenders (FEX) or similar technologies for production use cases. While initial, low-load implementations might run smoothly with such technologies, poor performance, VM lockups, and other issues might occur as implementations scale upward (see Knowledge Base article KB1612). Nutanix recommends the use of 10 Gbps, line-rate, non-blocking switches with larger buffers for production workloads.
Use an 802.3-2012 standards-compliant switch that has a low-latency, cut-through design and provides predictable, consistent traffic latency regardless of packet size, traffic pattern, or the features enabled on the 10 GbE interfaces. Port-to-port latency should be no higher than 2 microseconds.
Use fast-convergence technologies (such as Cisco PortFast) on switch ports that are connected to the hypervisor host.
Avoid using shared buffers for the 10 GbE ports. Use a dedicated buffer for each port.

Physical Network Layout
Use redundant top-of-rack switches in a traditional leaf-spine architecture. This simple, flat network design is well suited for a highly distributed, shared-nothing compute and storage architecture.
Add all the nodes that belong to a given cluster to the same Layer-2 network segment.
Other network layouts are supported as long as all other Nutanix recommendations are followed.

Controller VM
Do not remove the Controller VM from either the OVS bridge br0 or the native Linux bridge virbr0.

This diagram shows the recommended network configuration for an Acropolis cluster:

Figure: Recommended Acropolis Network Configuration

Layer 2 Network Management with Open vSwitch


The Acropolis hypervisor uses Open vSwitch to connect the Controller VM, the hypervisor, and the guest
VMs to each other and to the physical network. The OVS package is installed by default on each Acropolis
node and the OVS services start automatically when you start a node.
To configure virtual networking in an Acropolis cluster, you need to be familiar with OVS. This
documentation gives you a brief overview of OVS and the networking components that you need to
configure to enable the hypervisor, Controller VM, and guest VMs to connect to each other and to the
physical network.

About Open vSwitch


Open vSwitch (OVS) is an open-source software switch implemented in the Linux kernel and designed to
work in a multiserver virtualization environment. By default, OVS behaves like a Layer 2 learning switch
that maintains a MAC address learning table. The hypervisor host and VMs connect to virtual ports on the
switch. Nutanix uses the OpenFlow protocol to configure and communicate with Open vSwitch.
Each hypervisor hosts an OVS instance, and all OVS instances combine to form a single switch. As an
example, the following diagram shows OVS instances running on two hypervisor hosts.


Figure: Open vSwitch

Default Factory Configuration


The factory configuration of an Acropolis host includes a default OVS bridge named br0 and a native linux
bridge called virbr0.
Bridge br0 includes the following ports by default:

An internal port with the same name as the default bridge; that is, an internal port named br0. This is the
access port for the hypervisor host.
A bonded port named bond0. The bonded port aggregates all the physical interfaces available on the
node. For example, if the node has two 10 GbE interfaces and two 1 GbE interfaces, all four interfaces
are aggregated on bond0. This configuration is necessary for Foundation to successfully image the
node regardless of which interfaces are connected to the network.
Note: Before you begin configuring a virtual network on a node, you must disassociate the
1 GbE interfaces from the bond0 port. See Configuring an Open vSwitch Bond with Desired
Interfaces on page 17.

The following diagram illustrates the default factory configuration of OVS on an Acropolis node:


Figure: Default factory configuration of Open vSwitch in the Acropolis hypervisor

The Controller VM has two network interfaces. As shown in the diagram, one network interface connects to
bridge br0. The other network interface connects to a port on virbr0. The Controller VM uses this bridge to
communicate with the hypervisor host.
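As an optional check (not part of the factory configuration itself), you can list the Controller VM interfaces from the host to see these connections; the Controller VM name is the example used earlier in this guide:

root@ahv# virsh domiflist NTNX-12AM2K470031-D-CVM

The output lists one interface whose source is bridge br0 and one whose source is virbr0.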

Viewing the Network Configuration


Use the following commands to view the configuration of the network elements.
Before you begin: Log on to the Acropolis host with SSH.

To show interface properties such as link speed and status, log on to the Controller VM, and then list the
physical interfaces.
nutanix@cvm$ manage_ovs show_interfaces

Output similar to the following is displayed:


name  mode   link  speed
eth0  1000   True  1000
eth1  1000   True  1000
eth2  10000  True  10000
eth3  10000  True  10000

To show the ports and interfaces that are configured as uplinks, log on to the Controller VM, and then
list the uplink configuration.
nutanix@cvm$ manage_ovs --bridge_name bridge show_uplinks

Replace bridge with the name of the bridge for which you want to view uplink information. Omit the --bridge_name parameter if you want to view uplink information for the default OVS bridge br0.


Output similar to the following is displayed:


Uplink ports: bond0
Uplink ifaces: eth1 eth0

To show the virtual switching configuration, log on to the Acropolis host with SSH, and then list the
configuration of Open vSwitch.
root@ahv# ovs-vsctl show

Output similar to the following is displayed:


59ce3252-f3c1-4444-91d1-b5281b30cdba
Bridge "br0"
Port "br0"
Interface "br0"
type: internal
Port "vnet0"
Interface "vnet0"
Port "br0-arp"
Interface "br0-arp"
type: vxlan
options: {key="1", remote_ip="192.168.5.2"}
Port "bond0"
Interface "eth3"
Interface "eth2"
Port "bond1"
Interface "eth1"
Interface "eth0"
Port "br0-dhcp"
Interface "br0-dhcp"
type: vxlan
options: {key="1", remote_ip="192.0.2.131"}
ovs_version: "2.3.1"

To show the configuration of an OVS bond, log on to the Acropolis host with SSH, and then list the
configuration of the bond.
root@ahv# ovs-appctl bond/show bond_name

For example, show the configuration of bond0.


root@ahv# ovs-appctl bond/show bond0

Output similar to the following is displayed:


---- bond0 ----
bond_mode: active-backup
bond may use recirculation: no, Recirc-ID : -1
bond-hash-basis: 0
updelay: 0 ms
downdelay: 0 ms
lacp_status: off
active slave mac: 0c:c4:7a:48:b2:68(eth0)
slave eth0: enabled
active slave
may_enable: true
slave eth1: disabled
may_enable: false


Creating an Open vSwitch Bridge


Nutanix recommends that you create at least one additional OVS bridge and isolate the user VMs on that
bridge.
To create an OVS bridge, do the following:
1. Log on to the Acropolis host with SSH.
2. Log on to the Controller VM.
root@host# ssh nutanix@192.168.5.254

Accept the host authenticity warning if prompted, and enter the Controller VM nutanix password.
3. Create an OVS bridge on each host in the cluster.
nutanix@cvm$ allssh 'ssh root@192.168.5.1 /usr/bin/ovs-vsctl add-br bridge'

Replace bridge with a name for the bridge. The output does not indicate success explicitly, so you can
append && echo success to the command. If the bridge is created, the text success is displayed.
For example, create a bridge and name it br1.
nutanix@cvm$ allssh 'ssh root@192.168.5.1 /usr/bin/ovs-vsctl add-br br1 && echo success'

Output similar to the following is displayed:


nutanix@cvm$ allssh 'ssh root@192.168.5.1 /usr/bin/ovs-vsctl add-br br1 && echo success'
Executing ssh root@192.168.5.1 /usr/bin/ovs-vsctl add-br br1 && echo success on the
cluster
================== 192.0.2.203 =================
FIPS mode initialized
Nutanix KVM
success
...
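As an optional verification step (not part of the documented procedure), you can list the bridges on each host to confirm that the new bridge exists; the output assumes a bridge named br1 and is abbreviated:

nutanix@cvm$ allssh 'ssh root@192.168.5.1 /usr/bin/ovs-vsctl list-br'
br0
br1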

Configuring an Open vSwitch Bond with Desired Interfaces


When creating an OVS bond, you can specify the interfaces that you want to include in the bond.
Use this procedure to create a bond that includes a desired set of interfaces or to specify a new set of
interfaces for an existing bond. If you are modifying an existing bond, the Acropolis hypervisor removes the
bond and then re-creates the bond with the specified interfaces.
Note: Perform this procedure on factory-configured nodes to remove the 1 GbE interfaces from
the bonded port bond0. You cannot configure failover priority for the interfaces in an OVS bond, so
the disassociation is necessary to help prevent any unpredictable performance issues that might
result from a 10 GbE interface failing over to a 1 GbE interface. Nutanix recommends that you
aggregate only the 10 GbE interfaces on bond0 and use the 1 GbE interfaces on a separate OVS
bridge.
To create an OVS bond with the desired interfaces, do the following:
1. Log on to the Acropolis host with SSH.
2. Log on to the Controller VM.
root@host# ssh nutanix@192.168.5.254

Accept the host authenticity warning if prompted, and enter the Controller VM nutanix password.


3. Create a bond with the desired set of interfaces.


nutanix@cvm$ manage_ovs --bridge_name bridge --interfaces interfaces --bond_name bond_name update_uplinks

Replace bridge with the name of the bridge on which you want to create the bond. Omit the --bridge_name parameter if you want to create the bond on the default OVS bridge br0.
Replace bond_name with a name for the bond. The default value of --bond_name is bond0.
Replace interfaces with one of the following values:

A comma-separated list of the interfaces that you want to include in the bond. For example,
eth0,eth1 .
A keyword that indicates which interfaces you want to include. Possible keywords:

10g. Include all available 10 GbE interfaces


1g. Include all available 1 GbE interfaces
all. Include all available interfaces

For example, create a bond with interfaces eth0 and eth1.


nutanix@cvm$ manage_ovs --interfaces eth0,eth1 update_uplinks

Example output similar to the following is displayed:


2015-03-05 11:17:17 WARNING manage_ovs:291 Interface eth1 does not have link state
2015-03-05 11:17:17 INFO manage_ovs:325 Deleting OVS ports: bond0
2015-03-05 11:17:18 INFO manage_ovs:333 Adding bonded OVS ports: eth0 eth1
2015-03-05 11:17:22 INFO manage_ovs:364 Sending gratuitous ARPs for 192.0.2.21
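As another sketch, the following commands implement the recommendation from the note above: keep only the 10 GbE interfaces on bond0 of the default bridge br0, and place the 1 GbE interfaces in a separate bond on another bridge. The names br1 and bond1 are illustrative and assume that you have already created bridge br1:

nutanix@cvm$ manage_ovs --bridge_name br0 --bond_name bond0 --interfaces 10g update_uplinks
nutanix@cvm$ manage_ovs --bridge_name br1 --bond_name bond1 --interfaces 1g update_uplinks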

Virtual Network Segmentation with VLANs


You can set up a segmented virtual network on an Acropolis node by assigning the ports on Open vSwitch
bridges to different VLANs. VLAN port assignments are configured from the Controller VM that runs on
each node.
For best practices associated with VLAN assignments, see Best Practices for Configuring Networking in
an Acropolis Cluster on page 11. For information about assigning guest VMs to a VLAN, see the Web
Console Guide.
Assigning an Acropolis Host to a VLAN
To assign an AHV host to a VLAN, do the following on every AHV host in the cluster:
1. Log on to the Acropolis host with SSH.
2. Assign port br0 (the internal port on the default OVS bridge, br0) to the VLAN that you want the host to
be on.
root@ahv# ovs-vsctl set port br0 tag=host_vlan_tag

Replace host_vlan_tag with the VLAN tag for hosts.


3. Confirm VLAN tagging on port br0.
root@ahv# ovs-vsctl list port br0

4. Check the value of the tag parameter that is shown.


5. Verify connectivity to the IP address of the AHV host by performing a ping test.
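For example, to place the host on VLAN 10 and then confirm the tag (the VLAN ID is illustrative):

root@ahv# ovs-vsctl set port br0 tag=10
root@ahv# ovs-vsctl list port br0 | grep tag

Output similar to the following confirms the assignment:

tag                 : 10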


Assigning the Controller VM to a VLAN


By default, the public interface of a Controller VM is assigned to VLAN 0. To assign the Controller VM to
a different VLAN, change the VLAN ID of its public interface. After the change, you can access the public
interface from a device that is on the new VLAN.
Note: To avoid losing connectivity to the Controller VM, do not change the VLAN ID when you are
logged on to the Controller VM through its public interface. To change the VLAN ID, log on to the
internal interface that has IP address 192.168.5.254.
Perform these steps on every Controller VM in the cluster. To assign the Controller VM to a VLAN, do the
following:
1. Log on to the Acropolis host with SSH.
2. Log on to the Controller VM.
root@host# ssh nutanix@192.168.5.254

Accept the host authenticity warning if prompted, and enter the Controller VM nutanix password.
3. Assign the public interface of the Controller VM to a VLAN.
nutanix@cvm$ change_cvm_vlan vlan_id

Replace vlan_id with the ID of the VLAN to which you want to assign the Controller VM.
For example, add the Controller VM to VLAN 10.
nutanix@cvm$ change_cvm_vlan 10

Output similar to the following is displayed:


Replacing external NIC in CVM, old XML:
<interface type="bridge">
<mac address="52:54:00:02:23:48" />
<source bridge="br0" />
<vlan>
<tag id="10" />
</vlan>
<virtualport type="openvswitch">
<parameters interfaceid="95ce24f9-fb89-4760-98c5-01217305060d" />
</virtualport>
<target dev="vnet0" />
<model type="virtio" />
<alias name="net2" />
<address bus="0x00" domain="0x0000" function="0x0" slot="0x03" type="pci" />
</interface>
new XML:
<interface type="bridge">
<mac address="52:54:00:02:23:48" />
<model type="virtio" />
<address bus="0x00" domain="0x0000" function="0x0" slot="0x03" type="pci" />
<source bridge="br0" />
<virtualport type="openvswitch" />
</interface>
CVM external NIC successfully updated.


Changing the IP Address of an Acropolis Host


To change the IP address of an Acropolis host, do the following:
1. Edit the settings of port br0, which is the internal port on the default bridge br0.
a. Log on to the host console as root.
You can access the hypervisor host console either through IPMI or by attaching a keyboard and
monitor to the node.
b. Open the network interface configuration file for port br0 in a text editor.
root@ahv# vi /etc/sysconfig/network-scripts/ifcfg-br0

c. Update entries for host IP address, netmask, and gateway.


The block of configuration information that includes these entries is similar to the following:
ONBOOT="yes"
NM_CONTROLLED="no"
PERSISTENT_DHCLIENT=1
NETMASK="subnet_mask"
IPADDR="host_ip_addr"
DEVICE="br0"
TYPE="ethernet"
GATEWAY="gateway_ip_addr"
BOOTPROTO="none"

Replace host_ip_addr with the IP address for the hypervisor host.


Replace subnet_mask with the subnet mask for host_ip_addr.
Replace gateway_ip_addr with the gateway address for host_ip_addr.
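For example, a completed block might look like the following; the addresses are illustrative only:

ONBOOT="yes"
NM_CONTROLLED="no"
PERSISTENT_DHCLIENT=1
NETMASK="255.255.255.0"
IPADDR="192.0.2.50"
DEVICE="br0"
TYPE="ethernet"
GATEWAY="192.0.2.1"
BOOTPROTO="none"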

d. Save your changes.


e. Restart network services.
root@ahv# /etc/init.d/network restart

2. Log on to the Controller VM and restart genesis.


nutanix@cvm$ genesis restart

If the restart is successful, output similar to the following is displayed:


Stopping Genesis pids [1933, 30217, 30218, 30219, 30241]
Genesis started on pids [30378, 30379, 30380, 30381, 30403]

For information about how to log on to a Controller VM, see Controller VM Access on page 5.
3. Assign the host to a VLAN. For information about how to add a host to a VLAN, see Assigning an
Acropolis Host to a VLAN on page 18.

