
Is HA dependent on VirtualCenter? (Only for install)

What is the maximum host failure allowed in a cluster? (4)

How does HA know to restart a VM from a dropped host? (The storage lock is removed from the metadata.)
How many iSCSI targets will ESX support? (8 for 3.0.1, 64 for 3.5)
How many Fibre Channel targets? (256; 128 at install)
What is VMotion? (The ability to move a running VM from one host to another.)
Ask what the difference is when you use the VI Client to connect to VC versus directly to the ESX server itself.
When you connect to VC, you manage the ESX server via vpxa (the agent on the ESX server). Vpxa then passes those requests to hostd (the management service on the ESX server).
When you connect to the ESX server directly, you connect to hostd (bypassing vpxa). You can extend this to a troubleshooting case, where connecting to ESX shows one thing and connecting to VC shows another.
The problem is then most likely that hostd and vpxa are out of sync; "service vmware-vpxa restart" should take care of it.
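A quick sketch of that fix from the service console (assuming an ESX 3.x host, where the mgmt-vmware init script controls hostd):
# service mgmt-vmware restart    (restart hostd; briefly drops direct VI Client connections)
# service vmware-vpxa restart    (restart the VirtualCenter agent so it resyncs with hostd)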
The default partitions in VMware: / (5 GB), /boot (100 MB), swap (544 MB), /vmkcore (100 MB), /vmfs
Types of licensing: Starter and Standard
Starter: limited production-oriented features (local and NAS storage only)
Standard: production-oriented features; all the add-on licenses can be configured with this edition. Unlimited maximum number of VMs; SAN, iSCSI, NAS, and vSMP support; the VCB add-on at additional cost.
What is HA and DRS?
What ports are used by the license server?
What ports are used by VirtualCenter?
What are the user roles in VirtualCenter?
How do you check corrupted storage information?
How many VMs can you run under VirtualCenter?
How many ESX hosts can you connect to VirtualCenter?
The VMware service console manages the firewall, SNMP agent, Apache Tomcat, and other services such as HA and DRS.
VMware virtual machine files are .vmdk, .vmx (configuration), .nvram (BIOS file), and the log file.
The ESX Server hypervisor is known as the VMkernel.
The ESX Server hypervisor offers basic partitioning of server resources, but it also acts as the foundation for virtual infrastructure software, enabling VMotion, DRS, and the other keys to the dynamic, automated datacenter.
Host agent: on each managed host, software that collects, communicates, and executes the actions received through the VI Client. It is installed as part of the ESX Server installation.
Virtual Center agent: on each managed host, software that collects, communicates, and executes the actions received from the VirtualCenter server. The VirtualCenter agent is installed the first time any host is added to the VirtualCenter inventory.
ESX Server installation requirements: 1500 MHz Intel or AMD CPU, 1 GB memory minimum (up to 256 GB maximum), 4 GB hard drive space.
The configuration file that manages the mapping of service console file systems to mount points is /etc/fstab.
---ESX mount points at the time of installation:
/boot 100 MB ext3
/ 5 GB ext3
swap 544 MB
/var/log 2 GB ext3
/vmfs/volumes as required vmfs-3
/vmkcore 100 MB vmkcore
-------------/dev/cciss/c0d0 is considered local SCSI storage; /dev/sda is a storage-network-based LUN.
---The VI Client provides direct access to an ESX server for configuration and virtual machine management.
The VI Client is also used to access VirtualCenter to provide management, configuration, and monitoring of all ESX servers and their virtual machines within the virtual infrastructure environment. However, when using the VI Client to connect directly to the ESX server, no management of VirtualCenter features is possible. E.g., you cannot configure and administer VMware DRS or VMware HA.
---VMware license mode: default 60-day trial. After 60 days you can create VMs but you cannot power them on.
--The license types are Foundation, Standard, and Enterprise.
Foundation license: VMFS, Virtual SMP, VirtualCenter agent, VMware Update Manager, VCB
VI Standard license: Foundation license + the HA feature
Enterprise: Foundation + Standard licenses + VMotion, Storage VMotion, and VMware DRS
--------Virtual machines can time-sync with the ESX host.
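One way to enable that sync (a sketch; the datastore path and VM name are hypothetical) is to set the VMware Tools time-sync option in the VM's .vmx file:
# nano -w /vmfs/volumes/datastore1/myvm/myvm.vmx
tools.syncTime = "TRUE"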
-By default the first service console network connection is always named Service Console. It is always on vSwitch0, and that switch always connects to vmnic0.
-------To gather VMware diagnostic information, run the script vm-support from the service console.
If you generate the diagnostic information, it is stored in a vmware-virtualcenter-support-<date>@<time> folder. The folder contains viclient-support, which holds the VI Client log files. Another file is esx-support-<date>@<time>.tgz, a compressed archive containing the ESX server diagnostic information.
---The virtual switch works at layer 2 of the OSI model.
You cannot have two virtual switches mapped to one physical NIC.
You can map two or more physical NICs to one virtual switch.
A vSwitch is used by the VMkernel for accessing iSCSI or NAS-based storage.
A virtual switch is also used to give the service console access to a management LAN.
A virtual switch can have 1016 usable ports, with 8 ports reserved for management purposes, for a total of 1024.
The virtual switch default is 56 ports.
An ESX server can have 4096 ports maximum.
Maximum 4 virtual NICs per VM.
An ESX server can have up to 32 Intel NICs or 20 Broadcom NICs.
---Three types of network connections:
Service console port: access to the ESX server management network
VMkernel port: access to VMotion, iSCSI, and NFS/NAS networks
Virtual machine port group: access to VM networks

A service console port requires a network label, an optional VLAN ID, and a static IP or DHCP.
Multiple service console connections can be created only if they are configured on different networks. In addition, only a single service console gateway IP address can be defined.
--------------A VMkernel port allows the use of iSCSI and NAS-based networks; a VMkernel port is required for VMotion. It requires a network label, an optional VLAN ID, and IP settings.
Multiple VMkernel connections can be configured only if they are on different networks, and only a single VMkernel gateway IP address can be defined.
---------A virtual machine port group requires:
A network label
VLAN ID (optional)
------To list physical NICs:
# esxcfg-nics -l
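A related check (a sketch; this command also exists on ESX 3.x service consoles) lists the virtual switches with their port groups and uplinks:
# esxcfg-vswitch -l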
------------Three network policies are available for the vSwitch:
Security
Traffic shaping
NIC teaming
----------Network security policy modes:
Promiscuous mode: when set to Reject, placing a guest adapter in promiscuous mode has no effect on which frames are received by the adapter.
MAC address changes: when set to Reject, if the guest attempts to change the MAC address assigned to the virtual NIC, it stops receiving frames.
Forged transmits: when set to Reject, any frames the guest sends where the source address field contains a MAC address other than the assigned virtual NIC MAC address are dropped. (Default: Accept)
--------32 ESX hosts can use a single piece of shared storage.
------vmhba0:0:11:3
adapter : target : LUN : partition -- i.e., adapter vmhba0, target 0, LUN 11, partition 3
----------SNMP: incoming port 161, outgoing port 162
iSCSI client: outgoing port 3260
VirtualCenter agent: port 902
NTP client: port 123
VCB: ports 443, 902
----------The default iSCSI storage adapter is vmhba32.
iSCSI follows the IQN naming convention.
------------iSCSI uses CHAP authentication.
--The VMware license port is 27000.
---After changes are made at the command line, you need to restart the hostd daemon for the changes to take effect:
# service mgmt-vmware restart
---------------View the iSCSI name assigned to the iSCSI software adapter:
# vmkiscsi-tool -I -l vmhba40
View the iSCSI alias assigned to the iSCSI software adapter:
# vmkiscsi-tool -k -l vmhba40
---Log in to the service console as root and execute esxcfg-vmhbadevs to identify which LUNs are currently seen by the ESX server.
# esxcfg-vmhbadevs
Run the esxcfg-vmhbadevs command with the -m option to map VMFS names to VMFS UUIDs. Note that the LUN partition numbers are shown in this output. The hexadecimal values are described later.
# esxcfg-vmhbadevs -m
------------Use the vdf -h command to identify disk statistics (Size, Used, Avail, Use%, Mounted on) for all file system volumes recognized by your ESX host.
List the contents of the /vmfs/volumes directory. The hexadecimal numbers (in dark blue) are unique VMFS names. The names in light blue are the VMFS labels. The labels are symbolically linked to the VMFS volumes.
# ls -l /vmfs/volumes
----------Using the Linux device name (obtained using the esxcfg-vmhbadevs command), check LUNs A, B, and C to see if any are partitioned.
If there is no partition table (example a. below), go to step 3. If there is a table (example b.), go to step 2.
# fdisk -l /dev/sd<?>
-----------1. Format a partitioned LUN using vmkfstools. Use the -C and -S options, respectively, to create and label the volume. Using the command below, create a VMFS volume on LUN A. Ask your instructor if you should use a custom VMFS label name.
# vmkfstools -C vmfs3 -S LUN<#> vmhba1:0:<#>:1
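For instance (a sketch; the LUN number and label here are hypothetical placeholders):
# vmkfstools -C vmfs3 -S LUN5 vmhba1:0:5:1
This creates a VMFS-3 file system labeled LUN5 on partition 1 of LUN 5 behind adapter vmhba1.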
---------------Now that the LUN has been partitioned and formatted as a VMFS volume, it can be used as a datastore. Your ESX host recognizes these new volumes.
# vdf -h
--------------------Use the esxcfg-vmhbadevs command with the -m option to map the VMFS hex names to SAN LUNs.
# esxcfg-vmhbadevs -m
-------------It may be helpful to change the label to identify that this VMFS volume is spanned. Add -spanned to the VMFS label name.
# ln -sf /vmfs/volumes/<VMFS-UUID> /vmfs/volumes/<New-Label-Name>
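For example (the UUID and label shown are hypothetical, only to illustrate the form):
# ln -sf /vmfs/volumes/47b2dd3e-5ab4f260-1d29-001a4b123456 /vmfs/volumes/LUN5-spanned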
------------In order to remove a span, you must reformat LUN B with a new VMFS volume (because it was the LUN that was spanned to).
THIS WILL DELETE ALL DATA ON BOTH LUNS IN THE SPAN!
# vmkfstools -C vmfs3 -S <label> vmhba1:0:<#>:1
-----------------------Enable the ntpclient service on the Service Console
# esxcfg-firewall -e ntpclient
-----------Determine if the NTP daemon starts when the system boots.

# chkconfig --list ntpd
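To turn it on (a sketch; these are the standard Red Hat-style service commands available in the ESX service console):
# chkconfig ntpd on    (start the NTP daemon at every boot)
# service ntpd start   (start it immediately)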


-----Configure the system to synchronize the hardware clock and the operating system clock each time the ESX Server host is rebooted.
# nano -w /etc/sysconfig/clock
UTC=true
-----------List the available services.
# esxcfg-firewall -s
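As an illustration (a sketch; the port and rule name are hypothetical, but the -o option takes the documented port,protocol,direction,name form and sshClient is a known service name):
# esxcfg-firewall -o 3260,tcp,out,iscsiClient    (open a custom outbound port)
# esxcfg-firewall -e sshClient                   (enable a named service)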
------------Communication between the VI Client and the ESX server requires ports 902, 903.
Communication between the VI Client and VirtualCenter: 902
Communication between the VI Web Access client and ESX: 80, 443
Communication between the VI Client and VirtualCenter: 80, 443
Communication between the ESX server and the license server: 27010 (in), 27000 (out)
ESX server in a VMware HA cluster: 2050-5000 (in), 8042-8045 (out)
ESX server during VMotion: 8000
Required ports: iSCSI 3260, NFS 2049
Update Manager SOAP port: 8086
Update Manager Web port: 9084
VMware Converter SOAP port: 9085
VMware Converter Web port: 9084
------------vcbMounter is used, among other things, to create the snapshot for the third-party backup software to access:
vcbMounter -h <VC-IP-address-or-hostname>
-u <VC-user-account>
-p <VC-user-password>
-a <identifier-of-the-VM-to-backup>
-r <directory-on-VCB-proxy-to-put-backup>
-t <backup-type: file or fullvm>
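A concrete invocation might look like this (all values hypothetical; run on the VCB proxy):
vcbMounter -h vc01.example.com -u administrator -p secret -a ipaddr:192.168.1.50 -r c:\backups\vm50 -t fullvm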
-----------------List the different ways to identify your virtual machine. To do this, use the vcbVmName command:
vcbVmName
-h <VirtualCenter-Server-IP-address-or-hostname>
-u <VirtualCenter-Server-user-account>
-p <VirtualCenter-Server-password>
-s ipaddr:<IP-address-of-virtual-machine-to-backup>

------------------------Unmount the virtual disk(s):
mountvm -u c:\backups\tempmnt
----------------A VMFS volume can be created on one partition; 256 GB is the maximum file size (and thus VM virtual disk size) at the default 1 MB block size.
For a LUN, 32 extents can be added, up to 64 TB.
8 NFS mount points are the default maximum.
----------------The service console will use 272 MB by default.
---------------The files for a VMware virtual machine:
vmname.vmx -- virtual machine configuration file
vmname.vmdk -- the actual virtual hard drive for the virtual guest operating system
vmname-flat.vmdk -- the preallocated disk space
vmname.log -- virtual machine log file
vmname.vswp -- VM swap file
vmname.vmsd -- VM snapshot metadata file
Log files should be used only when you are having trouble with a virtual machine.
VMDK files: VMDK files are the actual virtual hard drive for the virtual guest operating system (virtual machine / VM). You can create either dynamic or fixed virtual disks. With dynamic disks, the disks start small and grow as the disk inside the guest OS grows. With fixed disks, the virtual disk and guest OS disk start out at the same (large) size. For more information on monolithic vs. split disks, see the comparison from sanbarrow.com.
VMEM files: a VMEM file is a backup of the virtual machine's paging file. It will only appear if the virtual machine is running, or if it has crashed.
VMSN & VMSD files: these files are used for VMware snapshots. A VMSN file is used to store the exact state of the virtual machine when the snapshot was taken. Using this snapshot, you can then restore your machine to the same state as when the snapshot was taken. A VMSD file stores information about snapshots (metadata). You'll notice that the names of these files match the names of the snapshots.
NVRAM files: these files are the BIOS for the virtual machine. The VM must know how many hard drives it has and other common BIOS settings. The NVRAM file is where that BIOS information is stored.
VMX files: a VMX file is the primary configuration file for a virtual machine. When you create a new virtual machine and answer questions about the operating system, disk sizes, and networking, those answers are stored in this file. A VMX file (e.g., Windows XP Professional.vmx) is actually a simple text file that can be edited with Notepad.
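To see these files in practice (a sketch; the datastore path and VM name are hypothetical), you can create a virtual disk from the service console and list the VM's directory:
# vmkfstools -c 10g /vmfs/volumes/datastore1/myvm/myvm.vmdk    (create a 10 GB virtual disk)
# ls -lh /vmfs/volumes/datastore1/myvm/                        (list the VM's files)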

------------We can create a VM:
1. From scratch
2. Deployed from a template
3. Cloned
4. P2V
5. From an ISO file
6. From a .vmx file
----------Max vCPUs per physical core: as a rule of thumb, 4 to 8 vCPUs.
----------------At the time of VMotion an ARP notification is sent. About 70 to 80% of the VM's memory is copied to the other ESX host; a bitmap file is created and the user's ongoing changes are tracked in it; those changes are then copied to the other ESX host.


----------------------------------DRS
DRS will balance the workload across the resources you presented to the cluster. It is an essential component of any successful ESX implementation. With VMware ESX 3.x and VirtualCenter 2.x, it's possible to configure VirtualCenter to manage the access to the resources automatically, partially, or manually by an administrator.
This option is particularly useful for setting an ESX server into maintenance mode. Maintenance mode is a good environment to perform tasks such as scanning for new storage area network (SAN) disks, reconfiguring the host operating system's networking, or shutting down the server for maintenance. Since virtual machines can't be run during maintenance mode, the virtual machines need to be relocated to other host servers. Commonly, administrators will configure the ESX cluster to fully automate the rules for the DRS settings. This allows VirtualCenter to take action based on workload statistics, available resources, and available host servers.
An important point to keep in mind is that DRS works in conjunction with any established resource pools defined in the VirtualCenter configuration. Poor resource pool configuration (such as using unlimited options) can cause DRS to make unnecessary performance adjustments. If you truly need to use unlimited resources within a resource pool, the best practice would be to isolate: a separate ESX cluster with a limited number of ESX hosts that share a single resource pool where the virtual machines that require unlimited resources are allowed to operate. Sharing unlimited-setting resource pools with limited-setting resource pools within the same cluster could cause DRS to make unnecessary performance adjustments. DRS can compensate for this scenario, but it may do so by bypassing any resource provisioning and planning previously established.
---------------------How VMotion works with DRS
The basic concept of VMotion is that ESX will move a virtual machine while it is running to another ESX host, with the move being transparent to the virtual machine. ESX requires a dedicated network interface at 1 Gbps or greater, shared storage, and a virtual machine that can be moved.
Not all virtual machines can be moved. Certain situations, such as an optical image binding to an image file, prevent a virtual machine from migrating. With VMotion enabled, an active virtual machine can be moved automatically or manually from one ESX host to another. An automatic situation would be, as described earlier, when a DRS cluster is configured for full automation: when the cluster goes into maintenance mode, the virtual machines are moved to another ESX host by VMotion. Should the DRS cluster be configured for all manual operations, the migration via VMotion is approved within the Virtual Infrastructure Client, then VMotion proceeds with the moves.
VMware ESX 3.5 introduces the highly anticipated Storage VMotion. Should your shared storage need to be brought offline for maintenance, Storage VMotion can migrate an active virtual machine to another storage location. This migration will take longer, as the geometry of the virtual machine's storage is copied to the new storage location. Because this is not a storage solution, the traffic is managed through the VMotion network interface.
Points to consider
One might assume that with the combined use of DRS and VMotion, all bases are covered. Well, not entirely. There are a few considerations that you need to be aware of so that you know what DRS and VMotion can and cannot do for you.
VMotion does not give an absolute zero gap of connectivity during a migration. In my experience the drop in connectivity via ping is usually limited to one ping from a client, or a minuscule increase in ping time on the actual virtual machine. Most situations will not notice the change and will reconnect over the network during a VMotion migration. There is also a slight increase in memory usage, and on larger virtual machines this may cause a warning light on RAM usage that usually clears independently.
Some virtual machines may fail to migrate, whether by an automatic VMotion task or if invoked manually. This is generally caused by obsolete virtual machines, CD-ROM binding, or other reasons that may not be intuitive. In one migration failure I experienced recently, the Virtual Infrastructure client did not provide any information other than that the operation timed out. The VirtualCenter server had no information related to the migration task in the local logs, only in the database.
Identification of your risks is the most important pre-implementation task you can do with DRS and VMotion. So what can you do to identify your risks? Here are a couple of easy tasks:
Schedule VMotion for all systems to keep them moving across hosts.
Regularly put ESX hosts in and then exit maintenance mode.
Do not leave mounted CD-ROM media on virtual machines (datastore/ISO file or host device options).
Keep virtual machines up to date with VMware Tools and virtual machine versioning.
Monitor the VPX_EVENT table in your ESX database for EVENT_TYPE = vim.event.VmFailedMigrateEvent.
All in all, DRS and VMotion are solid technologies. Anomalies can happen, and the risks should be identified and put into your regular monitoring for visibility.
VMotion usage scenarios
Now that VMotion is enabled on two or more hosts, when should it be used? There are two primary reasons to use VMotion: to balance the load on the physical ESX servers, and to eliminate the need to take a service offline in order to perform maintenance on the server.
VI3 balances its load by using a new feature called DRS. DRS is included in the VI3 Enterprise edition along with VMotion. This is because DRS uses VMotion to balance the load of an ESX cluster in real time between all of the servers involved in the cluster. For information on how to configure DRS, see page 95 of the VMware VI3 Resource Management Guide. Once DRS is properly configured, it will constantly be evaluating how best to distribute the load of running VMs amongst all of the host servers involved in the DRS-enabled cluster. If DRS decides that a particular VM would be better suited to run on a different host, then it will utilize VMotion to seamlessly migrate the VM over to the other host.
While DRS migrates VMs here and there with VMotion, it is also possible to migrate all of the VMs off of one host server (resources permitting) and onto another. This is accomplished by putting a server into "maintenance mode." When a server is put into maintenance mode, VMotion will be used to migrate all of the running VMs off it onto another server. This way it is possible to bring the first server offline to perform physical maintenance on it without impacting the services that it provides.
How VMotion works
As stated above, VMotion is the process that VMware has invented to migrate, or move, a virtual machine that is powered on from one host server to another host server without the VM incurring downtime. This is known as a "hot migration." How does this hot-migration technology that VMware has dubbed VMotion work? Well, as with everything, in a series of steps:
A request is made that VM-A should be migrated (VMotioned) from ESX-A to ESX-B.
VM-A's memory is pre-copied from ESX-A to ESX-B while ongoing changes are written to a memory bitmap on ESX-A.
VM-A is quiesced on ESX-A and VM-A's memory bitmap is copied to ESX-B.
VM-A is started on ESX-B and all access to VM-A is now directed to the copy running on ESX-B.
The rest of VM-A's memory is copied from ESX-A, all the while memory is being read from and written to VM-A on ESX-A when applications attempt to access that memory on VM-A on ESX-B.
If the migration is successful, VM-A is unregistered on ESX-A.
-----------------------------------For a VMotion event to be successful, the following must be true:
The VM cannot be connected to an internal vSwitch.
The VM cannot be connected to a CD-ROM or floppy drive that is using an ISO or floppy image stored on a drive that is local to the host server.
The VM's CPU affinity must not be set, i.e., binding it to physical CPU(s).
The VM must not be clustered with another VM (using a cluster service like the Microsoft Cluster Service (MSCS)).
The two ESX servers involved must both be using (the same!) shared storage.
The two ESX servers involved must be connected via Gigabit Ethernet (or better).
The two ESX servers involved must have access to the same physical networks.
The two ESX servers involved must have virtual switch port groups that are labeled the same.
The two ESX servers involved must have compatible CPUs (see support on Intel and AMD).
If any of the above conditions are not met, VMotion is not supported and will not start. The simplest way to test these conditions is to attempt a manual VMotion event. This is accomplished by right-clicking a VM in the VI3 client and clicking "Migrate..." The VI3 client will ask to which host this VM should be migrated. When a host is selected, several validation checks are performed. If any of the above conditions are violated, the VI3 client will halt the VMotion operation with an error.
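A quick sanity check for the network requirements (a sketch; the address is a hypothetical VMotion VMkernel IP on the destination host) is to ping it from the source host's service console:
# vmkping 192.168.10.12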
Conclusion
The intent of this article was to provide readers with a solid grasp of what VMotion is and how it can benefit them. If you have any outstanding questions with regards to VMotion or any VMware technology, please do not hesitate to send them to me via Ask the Experts.
-----------------------------------------------------------------------What Is VMware VMotion?
VMware VMotion enables the live migration of running virtual machines from one physical server to another with zero downtime, continuous service availability, and complete transaction integrity. VMotion allows IT organizations to:
Continuously and automatically allocate virtual machines within resource pools.
Improve availability by conducting maintenance without disrupting business operations.
VMotion is a key enabling technology for creating the dynamic, automated, and self-optimizing data center.
How Is VMware VMotion Used?
VMotion allows users to:
Automatically optimize and allocate entire pools of resources for maximum hardware utilization, flexibility, and availability.
Perform hardware maintenance without scheduled downtime.
Proactively migrate virtual machines away from failing or underperforming servers.
How Does VMotion Work?
Live migration of a virtual machine from one physical server to another with VMotion is enabled by three underlying technologies.
First, the entire state of a virtual machine is encapsulated by a set of files stored on shared storage such as a Fibre Channel or iSCSI Storage Area Network (SAN) or Network Attached Storage (NAS). VMware's clustered Virtual Machine File System (VMFS) allows multiple installations of ESX Server to access the same virtual machine files concurrently.
Second, the active memory and precise execution state of the virtual machine is rapidly transferred over a high speed network, allowing the virtual machine to instantaneously switch from running on the source ESX Server to the destination ESX Server. VMotion keeps the transfer period imperceptible to users by keeping track of ongoing memory transactions in a bitmap. Once the entire memory and system state has been copied over to the target ESX Server, VMotion suspends the source virtual machine, copies the bitmap to the target ESX Server, and resumes the virtual machine on the target ESX Server. This entire process takes less than two seconds on a Gigabit Ethernet network.
Third, the networks being used by the virtual machine are also virtualized by the underlying ESX Server, ensuring that even after the migration, the virtual machine network identity and network connections are preserved. VMotion manages the virtual MAC address as part of the process. Once the destination machine is activated, VMotion pings the network router to ensure that it is aware of the new physical location of the virtual MAC address. Since the migration of a virtual machine with VMotion preserves the precise execution state, the network identity, and the active network connections, the result is zero downtime and no disruption to users.
--------------------------------------------What is VirtualCenter?
VirtualCenter is virtual infrastructure management software that centrally manages an enterprise's virtual machines as a single, logical pool of resources. VirtualCenter provides:
Centralized virtual machine management. Manage hundreds of virtual machines from one location through robust access controls.
System availability and performance monitoring. Configure automated notifications and e-mail alerts.
Instant provisioning. Reduces server-provisioning time from weeks to tens of seconds.
Zero-downtime maintenance. Safeguards business continuity 24/7, without service interruptions for hardware maintenance, deployment, or migration.
Continuous workload consolidation. Optimizes the utilization of data center resources to minimize unused capacity.
SDK. Closely integrates third-party management software with VirtualCenter, so that the solutions you use today will work seamlessly within the virtual infrastructure. With VirtualCenter, an administrator can manage
------------------------------------------------What is Storage VMotion (SVMotion) and how do you perform a SVMotion using the VI Plugin?
There are at least 3 ways to perform a SVMotion: from the remote command line, interactively from the command line, and with the SVMotion VI Client Plugin.
Note: You need to have VMotion configured and working for SVMotion to work. Additionally, there are a ton of caveats about SVMotion in the ESX 3.5 administrator's guide (page 245) that could cause SVMotion not to work. One final reminder: SVMotion works to move the storage for a VM from a local datastore on an ESX server to a shared datastore (a SAN) and back; SVMotion will not move a VM at all, only the storage for a VM.
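A minimal sketch of the interactive remote-CLI path (assuming the VMware Remote CLI is installed; interactive mode prompts for the VirtualCenter server, the VM's datacenter and path, and the target datastore):
svmotion --interactive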

---------------------------Overview of VMware ESX / VMware Infrastructure Advanced Features

#1 ESX Server & ESXi Server
Even if all that you purchase is the most basic VMware ESXi virtualization package at a cost of $495, you still gain a number of advanced features. Of course, virtualization in general offers many benefits, no matter the virtualization package you choose: for example, hardware independence, better utilization of hardware, ease of management, fewer data center infrastructure resources required, and much more. While I cannot go into everything that ESX Server (itself) offers, here are the major advanced features:
Hardware-level virtualization: no base operating system license is needed; ESXi installs right on your hardware (bare-metal installation).
VMFS file system: see advanced feature #2, below.
SAN support: connectivity to iSCSI and Fibre Channel (FC) SAN storage, including features like boot from SAN.
Local SATA storage support.
64-bit guest OS support.
Network virtualization: virtual switches, virtual NICs, QoS & port configuration policies, and VLANs.
Enhanced virtual machine performance: workloads may perform, in some cases, even better in a VM than on a physical server because of features like transparent page sharing and nested page tables.
Virtual SMP: see advanced feature #4, below.
Support for up to 64 GB of RAM for VMs, and up to 32 logical CPUs and 256 GB of RAM on the host.
#2 VMFS
VMware's VMFS was created just for VMware virtualization. Thus, it is the highest performance file system available to use in virtualizing your enterprise. While VMFS is included with any edition or package of ESX Server or VI that you choose, VMFS is still listed as a separate product by VMware. This is because it is so unique.
VMFS is a high performance cluster file system allowing multiple systems to access the file system at the same time. VMFS is what gives you a solid platform to perform VMotion and VMHA. With VMFS you can dynamically increase a volume, support distributed journaling, and add a virtual disk on the fly.
#3 Virtual SMP
VMware's Virtual SMP (or VSMP) is the feature that allows a VMware ESX Server to utilize up to 4 physical processors on the host system simultaneously. Additionally, with VSMP, processing tasks will be balanced among the various CPUs.
#4 VM High Availability (VMHA)
One of the most amazing capabilities of VMware ESX is VMHA. With 2 ESX Servers, a SAN for shared storage, Virtual Center, and a VMHA license, if a single ESX Server fails, the virtual guests on that server will move over to the other server and restart within seconds. This feature works regardless of the operating system used or whether the applications support it.
#5 VMotion & Storage VMotion
With VMotion, VM guests are able to move from one ESX Server to another with no downtime for the users. VMotion is what makes DRS possible. VMotion also makes maintenance of an ESX server possible, again, without any downtime for the users of those virtual guests. What is required is a shared SAN storage system between the ESX Servers and a VMotion license.
Storage VMotion (or SVMotion) is similar to VMotion in the sense that "something" related to the VM is moved and there is no downtime to the VM guest and end users. However, with SVMotion the VM guest stays on the server that it resides on, but the virtual disk for that VM is what moves. Thus, you could move a VM guest's virtual disks from one ESX server's local datastore to a shared SAN datastore (or vice versa) with no downtime for the end users of that VM guest. There are a number of restrictions on this. To read more technical details on how it works, please see the VMware ESX Server 3.5 Administrator's Guide.
#6 VMware Consolidated Backup (VCB)
VMware Consolidated Backup (or VCB) is a group of Windows command line utilities, installed on a Windows system, that has SAN connectivity to the ESX Server VMFS file system. With VCB, you can perform file-level or image-level backups and restores of the VM guests, back to the VCB server. From there, you will have to find a way to get those VCB backup files off of the VCB server and integrated into your normal backup process. Many backup vendors integrate with VCB to make that task easier.

#7 VMware Update Manager
VMware Update Manager is a relatively new feature that ties into Virtual Center & ESX Server. With Update Manager, you can perform ESX Server updates and Windows and Linux operating system updates of your VM guests. To perform ESX Server updates, you can even use VMotion and upgrade an ESX Server without ever causing any downtime to the VM guests running on it. Overall, Update Manager is there to patch your host and guest systems to prevent security vulnerabilities from being exploited.
#8 VMware Distributed Resource Scheduler (DRS)
VMware's Distributed Resource Scheduler (or DRS) is one of the other truly amazing advanced features of ESX Server and the VI Suite. DRS is essentially a load-balancing and resource scheduling system for all of your ESX Servers. If set to fully automatic, DRS can recognize the best allocation of resources across all ESX Servers and dynamically move VM guests from one ESX Server to another, using VMotion, without any downtime to the end users. This can be used both for initial placement of VM guests and for continuous optimization (as VMware calls it). Additionally, this can be used for ESX Server maintenance.
#9 VMware's Virtual Center (VC) & Infrastructure Client (VI Client)
I prefer to list the VMware Infrastructure Client & Virtual Center as one of the advanced features of ESX Server & the VI Suite. Virtual Center is a required piece of many of the advanced ESX Server features. Also, VC has many advanced features in its own right. When tied with VC, the VI Client is really the interface that a VMware administrator uses to configure, optimize, and administer all of your ESX Server systems.
With the VI Client, you gain performance information, security & role administration, and template-based rollout of new VM guests for the entire virtual infrastructure. If you have more than 1 ESX Server, you need VMware Virtual Center.
#10 VMware Site Recovery Manager (SRM)
Recently announced for sale and expected to be shipping in 30 days, VMware's Site Recovery Manager is a huge disaster recovery feature. If you have two data centers (primary/protected and secondary/recovery), VMware ESX Servers at each site, and an SRM-supported SAN at each site, you can use SRM to plan, test, and recover your entire VMware virtual infrastructure.
VMware ESX Server vs. the VMware Infrastructure Suite
VMware ESX Server is packaged and purchased in 4 different packages.
VMware ESXi: the slimmed-down (yet fully functional) version of ESX Server that has no service console. By buying ESXi, you get VMFS and Virtual SMP only.
VMware Infrastructure Foundation (previously called the starter kit): the Foundation package includes ESX or ESXi, VMFS, Virtual SMP, Virtual Center agent, Consolidated Backup, and Update Manager.
VMware Infrastructure Standard includes ESX or ESXi, VMFS, Virtual SMP, Virtual Center agent, Consolidated Backup, Update Manager, and VMware HA.
VMware Infrastructure Enterprise includes ESX or ESXi, VMFS, Virtual SMP, Virtual Center agent, Consolidated Backup, Update Manager, VMware HA, VMotion, Storage VMotion, and DRS.
You should note that Virtual Center is required for some of the more advanced features and it is purchased separately. Also, there are varying levels of support available for these products. As the length and the priority of your support package increase, so does the cost.
-----------------Virtual Center license issues
However, all licensed functionality currently operating at the time the license server becomes unavailable continues to operate as follows:
- All VirtualCenter licensed features continue to operate indefinitely, relying on a cached version of the license state. This includes not only basic VirtualCenter operation, but licenses for VirtualCenter add-ons, such as VMotion and DRS.
- For ESX Server licensed features, there is a 14-day grace period during which hosts continue operation, relying on a cached version of the license state, even across reboots. After the grace period expires, certain ESX Server operations, such as powering on virtual machines, become unavailable.
--------------During the ESX Server grace period, when the license server is unavailable, the following operations are unaffected:
- Virtual machines continue to run. VI Clients can configure and operate virtual machines.
- ESX Server hosts continue to run. You can connect to any ESX Server host in the VirtualCenter inventory for operation and maintenance. Connections to the VirtualCenter Server remain. VI Clients can operate and maintain virtual machines from their host even if the VirtualCenter Server connection is also lost.
During the grace period, restricted operations include:
- Adding ESX Server hosts to the VirtualCenter inventory. You cannot change VirtualCenter agent licenses for hosts.
- Adding or removing hosts from a cluster. You cannot change host membership for the current VMotion, HA, or DRS configuration.
- Adding or removing license keys.
When the grace period has expired, cached license information is no longer stored. As a result, virtual machines can no longer be powered on. Running virtual machines continue to run but cannot be rebooted.
When the license server becomes available again, hosts reconnect to the license server. No rebooting or manual action is required to restore license availability. The grace-period timer is reset whenever the license server becomes available again.

------------------By default, ESX has 22 different users and 31 groups.
With VMware ESX Server, you have 4 roles by default.
# vdf -h
The recovery command for vmkfstools (in this case it failed):
# vmkfstools -R /vmfs/volumes/SAN-storage-2/
The ESX host's system UUID is found in the /etc/vmware/esx.conf file.

Recently VMware added a somewhat useful command line tool named vmfs-undelete, which exports metadata to a recovery log file that can restore vmdk block addresses in the event of deletion. It's a simple tool; at present it's experimental and unsupported, and it is not available on ESXi. The tool of course demands that you were proactive and ran its backup function in order to use it. Well, I think this falls well short of what we need here. What if you have no previous backups of the VMFS configuration? We really need to know what to look for and how to correct it, and that's exactly why I created this blog.
The command to list LUN paths is esxcfg-mpath -l; also check /var/log/vmkernel.
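To actually pick up newly presented LUNs (a sketch; the adapter name is hypothetical), rescan the HBA and watch the VMkernel log:
# esxcfg-rescan vmhba1
# tail /var/log/vmkernel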

------------Is HA dependent on virtualcenter (Only for Install)


What is the Maximum Host Failure allowed in a cluster (4)
How does HA know to restart a VM from a dropped Host (storage lock will be remov
ed from the metadata)
How many iSCSI targets will ESX support 8 for 3.01, (64 for 3.5)
How Many FiberChannel targets(256) (128 on Install)
What is Vmotion(ability to move running vm from one host to another)
Ask what is the different when you use viclient connect to VC and directly to ES
X server itself. When you connect to VC you manage ESX server via vpxa (Agent on
esx server). Vpxa then pass those request to hostd (management service on esx s
erver). When you connect to ESX server directly, you connect to hostd (bypass vp
xa). You can extend this to a trobleshoot case, where connect to esx see one thi
ng and connect to VC see another. So the problem is most likely out of sync betw
een hostd and vpxa, "service vmware-vpxa restart" should take care of it.

the default partitions in vmware --/ (5 gb),/boot(100mb),/swap(544 mb) /vmkcore


(100) ,/vmfs
Types of licensing : starter and standard
Starter : limited production oriented features ( Local and NAS storage only)
Standard ; production oriented features and all the add-on licenses can be confi
gured with this edition.Unlimited max number of vms,san,iscsi,nas, vsmp support
and VCB addon at additional cost.
What is ha and drs
what is the ports used by license server
What is ports used by virtual center
Waht is the user roles in virtual center
how do you check the corrupted storage information
how many vm's you can use in Virtual center
how many esx hosts you can connect in Virtual center
Vmware service console manages firewall, snmp agent, apache tomcat other service
s like HA & DRs
Vmware vitual machine files are vmdk, vmx (configuration), nvram (BIOS file), Lo
g file
The eSx server hypervisor is known as vmkernal.
Hypervisor ESX server offers basic partition of server resources, however it als
o acts as the foundation for virtual infrastructure software enabling Vmotion,DR
S and so forth kerys to the dynamic, automated datacenter.
host agent on each managed host software collects , communicates, and executes t
he action recieved through the VI client. it is installed as a part of the ESX S
erver Installation.
Virtual Center agent : On each managed host, software that collects commmunicate
s and executes the actions received from the virtual center server. Ther virtual
center agent is installed the firsttime any host is added to the virtual center
inventory.
ESX server installation requirements 1500 mhz cpu intel or amd, memory 1 gb mini
mum up to 256 mb, 4 gb hard drive space,
The configuration file that manages the mapping of service console file system t
o mount point is /etc/fstab
---ESX mount points at the time of installaiton
/boot 100 mb ext3
/
5 gb ext3
swap 544mb
/va/log 2 gb
/vmfs/volumes as required vmfs-3

vmkcore 100 mb vmkcore


-------------/cciss/c0d0 consider as local SCSI storage
/dev/sda is storage network based lun
---VI client provides direct access to an ESX server for configuration and virtual
machine management
The VI Client is also used to access Vitual center to provide management configu
ration and monitoring of all ESX servers and their virtual machines within the v
irtual infractur environment. However when using the VI client to connect direct
ly to the ESX server, no management of Virtual center feature is possible.
EG; you cannot cofigure and administer, Vmware DRS or Vmware HA.
---Vmware license mode : default 60 days trail. after 60 days you can create VM's b
ut you cannot power on VM's
--The license types are Foundation, Standard and Enterprise
Fondation license : VMFS,Virtual SMP, Virtual center agent, Vmware update manage
r, VCB
VI standard license : Foundation license + HA feature
Enterprise : Foundation license + STandard license + Vmotion, VM Storage vmotion
, and VMware DRS
--------Virtual machines time sync with ESX host
-By default the first service console network connection is always named service
console. It always in Vswitch 0. The switch always connects vmnic 0
-------To gather vmware diagnostics information run the script vm-spport from the servi
ce console
If you generate the diagnostic information, this information will be stored in V
mware-virtualcenter-support-date@time folder
The folder contains Viclient-support, which will holds vi client log files
Another file is esx-support-date@time.tgz, which is compressed archive file cont
ains esx server diagnostics information
----

The virtual switch work at layer 2 of the OSI model.


You cannot have two virtual switchs mapped to one physical NIC
You can map two or more physical NIC mapped to one virtual switch.
A swithc used by VMkernal for accessing ISCSI or NAS based storage
Virtual switch used to give the service console access to a management LaN
Vitual swich can have 1016 ports adn 8 ports used for management purpose total i
s 1024
Virtual switch defalt ports are 56
ESX server can have 4096 ports max
Maximum 4 virtual NIC per VM
ESX server can have 32 NIC on Intel NIC, 20 Broadcom NIC's
---Three type of network connections
Service console port : access to ESX server management network
Vmkernal port : Access to vmotion,ISCSI,NFS/NAS networks
Virtual machine port group : Access to VM networks
Service console port requires network lable,VLAN ID optional, Static ip or DHCP
Multiple service console connections can be created only if they are configured
on different network. In addition only a single service console gateway, IP addr
ess can be defined
--------------A VMkernal port allow to use ISCSI, NAS based networks. Vmkernal port is requied
for Vmotion.
It requires network lablel, vlan id optional, IP setting
Multiple Vmkernal connections can be configured only if they are configured on a
different networks,only single vmkernal gateway Ip address can be defined.
---------Virtual machine port group required
A network lable
VLAN id optional
------to list physical nics
esxcfg -nics -l
-------------

Three network policies are available for the vswitch


Security
Traffic shaping
NIC teaming
----------Network security policy mode
Promiscos mode : when set to reject, placing a guest adapter in promiscuous mode
has no which frames are received by the adapter
Mac address changer : when set to reject, if the guest attempts to change the MA
C adress assigned to the virtual NIC, it stops receiving frames
Forged trasmitts - When set to reject drops any frames that the guest sends, whe
re the source address field contains a MAC address other than the assigned virtu
al NIC mac addresss ( default accept)
--------32 ESX hosts can user a singel shared storage
------vmhba0:0:11:3
adapter 0 : target id : LUN: partition
----------SNmp incomming port 161
ISCSI client

out going port 162

outgoing port 3260

Virtual center agent 902


ntp client
VCB

123 port

443, 902 ports

----------the defaul ISCSI storage adapter is vmhba32


ISCSI follow Iqn naming convention
------------ISCSI uses CHAP authuntication
--Vmware license port is 27000
---After changing made at command line for reflecting the changes you need to start
the hostd daemon

service vmware-mgmt restart


---------------View the iSCSI name assigned to the iSCSI software adapter:
vmkiscsi-tool -I -1 vmhba40
View the iSCSI alias assigned to the iSCSI software adapter:
vmkiscsi-tool -k -1 vmhba40
---Login to the service console as root and execute e sxc f g - vmhbadevs to identi
fy which
LUNs are currently seen by the ESX server.
# esxcfg-vmhbadevs
Run the esxcf g-vmhbadevs command with the -m option to map VMFS names to
VMFS UUIDs. Note that the LUN partition numbers are shown in this output. The
hexidecimal values are described later.
# esxcfg-vmhbadevs -m
------------Use the vdf -h comand to identify disk statistics (Size, Used, Avail, Use%, Moun
ted
on) for all file system volumes recognized by your ESX host.
List the contents of the /vmfs/volumes directory. The hexidecimal numbers (in da
rk blue) are
unique VMFS names. The names in light blue are the VMFS labels. The labels are
symbolically linked to the VMFS volumes.
ls -l \vmfs\volumes
----------Using the Linux device name (obtained using e sxc f g - vmhbadevs command), chec
k
LUNs A, B and C to see if any are partitioned.
If there is no partition table, example a. below, go to step 3. If there is a ta
ble, example b. go
to step 2.
# fdisk -1 /dev/sd<?>
-----------1. Format a partitioned LUN using vmkf s tool s . Use the - C and - S options re
spectively,
to create and label the volume. Using the command below, create a VMFS volume on
LUN A.
Ask your instructor if you should use a custom VMFS label name.
# vmkfstools -C vmfs3 -S LUNc#> vmhbal:O:#:l
---------------Now that the LUN has been partitioned and formatted as a VMFS volume, it can be
used as a
datastore. Your ESX host recognizes these new volumes.

vdf -h
--------------------Use the esxcf g-vmhbadevs command with the -m option to map the VMFS hex
names to SAN LUNs.
# esxcfg-vmhbadevs -m
-------------It may be helpful to change the label to identify that this VMFS volume is spann
ed.
Add - spanned to the VMFS label name.
# In -sf /vmfs/volumes/<V~~S-UUID> /vmfs/volumes/
<New- L abel- N ame>
------------In order to remove a span, you must reformat LUN B with a new VMFS volume (becau
se it
was the LUN that was spanned to).
THIS WILL DELETE ALL DATA ON BOTH LUNS IN THE SPAN !
# vmkfstools -C vmfs3 -S <label> vmhbal:O:#:l
-----------------------Enable the ntpclient service on the Service Console
# esxcfg-firewall -e ntpclient
-----------Determine if the NTP daemon starts when the system boots.
# chkconfig --list ntpd
-----Configure the system to synchronize the hwclock and the operating system clocks
each time
the ESX Server host is rebooted.
# nano -w /etc/sysconfig/clock
UTC= t rue
-----------List the available services.
# esxcfg-firewall -s
------------Communication between VI client and ESX server the ports reqired 902, 903
Communication between VI client and virtual center 902
Communication between VI web access client and ESX 80, 443
Communication between VI client and virtual center 80, 443
Communication between ESX server and License server 27010 (in), 27000(out)
ESX server in a vmware HA cluster 2050 -5000 (in), 8042-8045 (out)
ESX serever during VMotion 8000

The required port for ISCSI 3260, NFS : 2049


Update manager SOAP port - 8086
Update manager Web port - 9084
Vmware converter SOAP port 9085
Vmware converter Web port 9084
------------vcbMounter is used, among other things, to create the snapshot for the 3rd party
backup software to access:
vcbMounter -h <VC - IP - address - or - hostname>
-u <VC- u ser- a ccount>
-p cVC user password>
-a ~~dzntifi-eo rf - t he- V M -t o- b ackup>
-r <Directory - on - VCB Proxy - to - putbackup>
-t <Backup - type: - file - or - fullvm>
-----------------List the different ways to identify your virtual machine. To do this, use the
vcbVmName command:
vcbVmName
-h < V i r t u a l Center - Server-IP-Address-or-Hostname>
-u < V i r t u a l c e n t e r- S e r v e r- u s e r- a ccount>
-p < V i r t u a l c e n t e r- S e r v e r- p assword>
-s ipaddr:<IP - address - of - virtual-machine - to - backup>
------------------------Unmount the virtual disk(s):
mountvm -u c:\backups\tempmnt
----------------VMFS Volume can be created one partition 256 GB in the maimum size of a VM
For a LUN 32extents can be added up to 64 TB
8 mount points for NFS are the maximum
----------------Service console will use 272 mb
---------------The files for vmware virtual machine
vmname.vmx --virtual machine configuration file
vmname.vmdk -- actual virtual hard drive for the virtual guest operation system
vmname_flot.vmdk--preallocated space
vmname.log --virtual machine log file
vmname.vswap -- vm swap file

vmname.vmsd ---vm snapshot file


Log files should be used only when you are having trouble with a virtual machine
.
VMDK files
VMDK files are the actual virtual hard drive for the virtual guest op
eration system (virtual machine / VM). You can create either dynamic or fixed vi
rtual disks. With dynamic disks, the disks start small and grow as the disk insi
de the guest OS grows. With fixed disks, the virtual disk and guest OS disk star
t out at the same (large) disk. For more information on monolithic vs. split dis
ks see this comparison from sanbarrow.com.
VMEM A VMEM file is a backup of the virtual machine s paging file. It will only ap
pear if the virtual machine is running, or if it has crashed.
VMSN & VMSD files these files are used for VMware snapshots. A VMSN file is used
to store the exact state of the virtual machine when the snapshot was taken. Us
ing this snapshot, you can then restore your machine to the same state as when t
he snapshot was taken. A VMSD file stores information about snapshots (metadata)
. You ll notice that the names of these files match the names of the snapshots.
NVRAM files
these files are the BIOS for the virtual machine. The VM must know h
ow many hard drives it has and other common BIOS settings. The NVRAM file is whe
re that BIOS information is stored.
VMX files a VMX file is the primary configuration file for a virtual machine. Wh
en you create a new virtual machine and answer questions about the operating sys
tem, disk sizes, and networking, those answers are stored in this file. As you c
an see from the screenshot below, a VMX file is actually a simple text file that
can be edited with Notepad. Here is the Windows XP Professional.vmx file from the
directory listing, above:
------------we can create VM
1. Vm from scratch
2.Deploy from templete
3. Cloned
4. P2V
5. Iso file
6..vmx file
----------Max CPU's per core is 4 to 8 vcpu's
----------------At the time of vomotion arp notification will be released
70 to 80 % will be copied to the other ESX host
a bit map file will be created, and uses will be working on the bitmap file
and the changes will be copied to the other ESX host
----------------------------------DRS
DRS will balance the workload across the resources you presented to the cluster.
It is an essential component of any successful ESX implementation.
With VMware ESX 3.x and VirtualCenter 2.x, it's possible to configure VirtualCen
ter to manage the access to the resources automatically, partially, or manually
by an administrator.
This option is particularly useful for setting an ESX server into maintenance mo
de. Maintenance mode is a good environment to perform tasks such as scanning for
new storage area network (SAN) disks, reconfiguring the host operating system's
networking or shutting down the server for maintenance. Since virtual machines
can't be run during maintenance mode, the virtual machines need to be relocated
to other host servers. Commonly, administrators will configure the ESX cluster t
o fully automate the rules for the DRS settings. This allows VirtualCenter to ta
ke action based on workload statistics, available resources, and available host servers.
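As a side note, maintenance mode can also be entered and exited from the service console on ESX 3.x; a hedged sketch using vmware-vim-cmd (running VMs must first be migrated or powered off):
vmware-vim-cmd hostsvc/maintenance_mode_enter
vmware-vim-cmd hostsvc/maintenance_mode_exit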
An important point to keep in mind is that DRS works in conjunction with any est
ablished resource pools defined in the VirtualCenter configuration. Poor resourc
e pool configuration (such as using unlimited options) can cause DRS to make unn
ecessary performance adjustments. If you truly need to use unlimited resources within a resource pool, the best practice is to isolate them. Isolation requires a separate ESX cluster with a limited number of ESX hosts that share a single resource pool, where the virtual machines that require unlimited resources are allowed to operate. Sharing unlimited-setting resource pools with limited-setting resource pools within the same cluster could cause DRS to make unnecessary performance adjustments. DRS can compensate for this scenario, but possibly by bypassing any resource provisioning and planning previously established.
---------------------How VMotion works with DRS
The basic concept of VMotion is that ESX will move a virtual machine while it is
running to another ESX host with the move being transparent to the virtual mach
ine.
ESX requires a dedicated network interface at 1 GB per second or greater,
shared storage and a virtual machine that can be moved.
Not all virtual machines can be moved. Certain situations, such as optical image
binding to an image file, prevent a virtual machine from migrating. With VMotio
n enabled, an active virtual machine can be moved automatically or manually from
one ESX host to another. An automatic situation would be as described earlier w
hen a DRS cluster is configured for full automation. When the cluster goes into
maintenance mode, the virtual machines are moved to another ESX host by VMotion.
Should the DRS cluster be configured for all manual operations, the migration v
ia VMotion is approved within the Virtual Infrastructure Client, then VMotion pr
oceeds with the moves.
VMware ESX 3.5 introduces the highly anticipated Storage VMotion. Should your shared storage need to be brought offline for maintenance, Storage VMotion can migrate an active virtual machine to another storage location. This migration takes longer, as the geometry of the virtual machine's storage is copied to the new storage location. Because this is not a storage solution, the traffic is managed through the VMotion network interface.
Points to consider
One might assume that with the combined use of DRS and VMotion all bases are covered. Well, not entirely. There are a few considerations that you need to be aware of so that you know what DRS and VMotion can and cannot do for you.
VMotion does not give an absolute zero gap of connectivity during a migration. I
n my experiences the drop in connectivity via ping is usually limited to one pin
g from a client or a miniscule increase in ping time on the actual virtual machi
ne. Most situations will not notice the change and reconnect over the network du
ring a VMotion migration. There also is a slight increase in memory usage and on
larger virtual machines this may cause a warning light on RAM usage that usuall
y clears independently.
Some virtual machines may fail to migrate, whether by an automatic VMotion task or when invoked manually. This is generally caused by obsolete virtual machines, CD-ROM binding or other reasons that may not be intuitive. In one migration failure I experienced recently, the Virtual Infrastructure client did not provide any information other than that the operation timed out. The VirtualCenter server had no information related to the migration task in the local logs; the only trace was in the database (see the VPX_EVENT tip below).
Identification of your risks is the most important pre-implementation task you c
an do with DRS and VMotion. So what can you do to identify your risks? Here are
a couple of easy tasks:
Schedule VMotion for all systems to keep them moving across hosts.
Regularly put ESX hosts in and then exit maintenance mode.

Do not leave mounted CD-ROM media on virtual machines (datastore/ISO file or hos
t device options).
Keep virtual machines up to date with VMware tools and virtual machine versionin
g.
Monitor the VPX_EVENT table in your ESX database for the EVENT_TYPE = vim.event.
VmFailedMigrateEvent
All in all, DRS and VMotion are solid technologies. Anomalies can happen, and th
e risks should be identified and put into your regular monitoring for visibility
.
VMotion usage scenarios
Now that VMotion is enabled on two or more hosts, when should it be used? There
are two primary reasons to use VMotion: to balance the load on the physical ESX
servers and eliminate the need to take a service offline in order to perform mai
ntenance on the server.
VI3 balances its load by using a new feature called DRS. DRS is included in the
VI3 Enterprise edition along with VMotion. This is because DRS uses VMotion to b
alance the load of an ESX cluster in real time between all of the servers involve
d in the cluster. For information on how to configure DRS see page 95 of the VMw
are VI3 Resource Management Guide. Once DRS is properly configured it will const
antly be evaluating how best to distribute the load of running VMs amongst all o
f the host servers involved in the DRS-enabled cluster. If DRS decides that a pa
rticular VM would be better suited to run on a different host then it will utili
ze VMotion to seamlessly migrate the VM over to the other host.
While DRS migrates VMs here and there with VMotion, it is also possible to migra
te all of the VMs off of one host server (resources permitting) and onto another
. This is accomplished by putting a server into "maintenance mode." When a serve
r is put into maintenance mode, VMotion will be used to migrate all of the runni
ng VMs off it onto another server. This way it is possible to bring the first se
rver offline to perform physical maintenance on it without impacting the service
s that it provides.
How VMotion works
As stated above, VMotion is the process that VMware has invented to migrate, or
move, a virtual machine that is powered on from one host server to another host
server without the VM incurring downtime. This is known as a "hot-migration." Ho
w does this hot-migration technology that VMware has dubbed VMotion work? Well,
as with everything, in a series of steps:
A request has been made that VM-A should be migrated (VMotioned) from ESX-A to E
SX-B
VM-A's memory is pre-copied from ESX-A to ESX-B while ongoing changes are writte
n to a memory bitmap on ESX-A.
VM-A is quiesced on ESX-A and VM-A's memory bitmap is copied to ESX-B.
VM-A is started on ESX-B and all access to VM-A is now directed to the copy runn
ing on ESX-B.
The rest of VM-A's memory is copied from ESX-A in the background; if an application on VM-A (now running on ESX-B) touches memory that has not yet been copied, that memory is fetched from ESX-A on demand.
If the migration is successful VM-A is unregistered on ESX-A.
For a VMotion event to be successful the following must be true:
*Editor's Note: Special thanks to Colin Stamp of IBM United Kingdom Ltd. for rewriting the following list.
The VM cannot be connected to an internal vswitch.
The VM cannot be connected to a CD-ROM or floppy drive that is using an ISO or f
loppy image stored on a drive that is local to the host server.
The VM's affinity must not be set, i.e., binding it to physical CPU(s).
The VM must not be clustered with another VM (using a cluster service like the M
icrosoft Cluster Service (MSCS)).
The two ESX servers involved must both be using (the same!) shared storage.
The two ESX servers involved must be connected via Gigabit Ethernet (or better).
The two ESX servers involved must have access to the same physical networks.
The two ESX servers involved must have virtual switch port groups that are labeled the same.
The two ESX servers involved must have compatible CPUs. (See support on Intel an
d AMD).
If any of the above conditions are not met, VMotion is not supported and will not start. The simplest way to test these conditions is to attempt a manual VMotion event. This is accomplished by right-clicking on the VM in the VI3 client and clicking on "Migrate..." The VI3 client will ask to which host this VM should be migrated. When a host is selected, several validation checks are performed. If any of the checks fail, the VI3 client will halt the VMotion operation with an error.
Conclusion
The intent of this article was to provide readers with a solid grasp of what VMo
tion is and how it can benefit them. If you have any outstanding questions with
regards to VMotion or any VMware technology please do not hesitate to send them
to me via ask the experts.
-----------------------------------------------------------------------What Is VMware VMotion?
VMware VMotion enables the live migration of running virtual machines from one phy
sical server to another
with zero downtime, continuous service availability, and complete transaction in
tegrity. VMotion allows IT organizations
to:
Continuously and automatically allocate virtual machines within resource pools.
Improve availability by conducting maintenance without disrupting business operations.
VMotion is a key enabling technology for creating the dynamic, automated, and se
lf-optimizing data center.
How Is VMware VMotion Used?
VMotion allows users to:
Automatically optimize and allocate entire pools of resources for maximum hardwa
re utilization, flexibility and
availability.
Perform hardware maintenance without scheduled downtime.
Proactively migrate virtual machines away from failing or underperforming server
s.
How Does VMotion work?
Live migration of a virtual machine from one physical server to another with VMotion is enabled by three underlying technologies.
First, the entire state of a virtual machine is encapsulated by a set of files s
tored on shared storage such as Fibre Channel
or iSCSI Storage Area Network (SAN) or Network Attached Storage (NAS). VMware's clustered Virtual Machine File
System (VMFS) allows multiple installations of ESX Server to access the same vir
tual machine files concurrently.
Second, the active memory and precise execution state of the virtual machine is
rapidly transferred over a high speed
network, allowing the virtual machine to instantaneously switch from running on
the source ESX Server to the destination
ESX Server. VMotion keeps the transfer period imperceptible to users by keeping
track of on-going memory transactions
in a bitmap. Once the entire memory and system state has been copied over to the
target ESX Server, VMotion
suspends the source virtual machine, copies the bitmap to the target ESX Server,
and resumes the virtual machine on
the target ESX Server. This entire process takes less than two seconds on a Giga
bit Ethernet network.
Third, the networks being used by the virtual machine are also virtualized by th
e underlying ESX Server, ensuring
that even after the migration, the virtual machine network identity and network
connections are preserved. VMotion
manages the virtual MAC address as part of the process.
Once the destination machine is activated, VMotion pings the network router to e
nsure that it is aware of the new
physical location of the virtual MAC address. Since the migration of a virtual m
achine with VMotion preserves the precise
execution state, the network identity, and the active network connections, the r
esult is zero downtime and no disruption
to users.
--------------------------------------------What is VirtualCenter?
VirtualCenter is virtual infrastructure management software that centrally manages an enterprise's virtual machines as a single, logical pool of resources. VirtualCenter provides:
Centralized virtual machine management. Manage hundreds of virtual machines from
one location through robust access controls.
Monitor system availability and performance. Configure automated notifications a
nd e-mail alerts.
Instant provisioning. Reduces server-provisioning time from weeks to tens of sec
onds.
Zero-downtime maintenance. Safeguards business continuity 24/7, without service
interruptions for hardware maintenance, deployment,or migration.
Continuous workload consolidation. Optimizes the utilization of data center reso
urces to minimize unused capacity.
SDK. Closely integrates 3rd-party management software with VirtualCenter, so that the solutions you use today will work seamlessly within the virtual infrastructure. With VirtualCenter, an administrator can manage the entire virtual environment from a single point of control.
------------------------------------------------What is Storage VMotion (SVMotion) and how do you perform a SVMotion using the VI Plugin?
There are at least 3 ways to perform a SVMotion: from the remote command line, interactively from the command line, and with the SVMotion VI Client Plugin.
Note: You need to have VMotion configured and working for SVMotion to work. Additionally, there are a ton of caveats about SVMotion in the ESX 3.5 administrator's guide (page 245) that could cause SVMotion not to work. One final reminder: SVMotion only works to move the storage for a VM from a local datastore on an ESX server to a shared datastore (a SAN) and back; SVMotion will not move a VM at all, only the storage for a VM.
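For reference, the remote command line method uses the svmotion command from the VI Remote CLI; a hedged sketch (server, datacenter, VM, and datastore names here are made-up placeholders):
svmotion --interactive
or, non-interactively:
svmotion --url=https://vcserver/sdk --username=admin --password=secret --datacenter=DC1 --vm="[storage1] myvm/myvm.vmx:storage2"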

---------------------------Overview of VMware ESX / VMware Infrastructure Advanced Features


#1 ESX Server & ESXi Server
Even if all that you purchase is the most basic VMware ESXi virtualization packa
ge at a cost of $495, you still gain a number of advanced features. Of course, v
irtualization, in general, offers many benefits, no matter the virtualization pa
ckage you choose. For example - hardware independence, better utilization of har
dware, ease of management, fewer data center infrastructure resources required,
and much more. While I cannot go into everything that ESX Server (itself) offers
, here are the major advanced features:
Hardware-level virtualization: no base operating system license is needed; ESXi installs right on your hardware (bare-metal installation).
VMFS file system: see advanced feature #2, below.
SAN support: connectivity to iSCSI and Fibre Channel (FC) SAN storage, including features like boot from SAN.
Local SATA storage support.
64-bit guest OS support.
Network virtualization: virtual switches, virtual NICs, QoS & port configuration policies, and VLANs.
Enhanced virtual machine performance: virtual machines may perform, in some cases, even better than on a physical server because of features like transparent page sharing and nested page tables.
Virtual SMP: see advanced feature #4, below.
Support for up to 64GB of RAM per VM, and up to 32 logical CPUs and 256GB of RAM on the host.
#2 VMFS
VMware's VMFS was created just for VMware virtualization. Thus, it is the highest-performance file system available for virtualizing your enterprise. While VMFS is included with any edition or package of ESX Server or VI that you choose, VMFS is still listed as a separate product by VMware because it is so unique.
VMFS is a high performance cluster file system allowing multiple systems to acce
ss the file system at the same time. VMFS is what gives you a solid platform to
perform VMotion and VMHA. With VMFS you can dynamically increase a volume, use distributed journaling, and add a virtual disk on the fly.
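As a small illustration of the "on the fly" part, a new virtual disk can be created on an existing VMFS volume from the service console; a hedged sketch (datastore and file names are placeholders):
vmkfstools -c 10g /vmfs/volumes/storage1/myvm/extra-disk.vmdk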
#3 Virtual SMP
VMware's Virtual SMP (or VSMP) is the feature that allows a single virtual machine to utilize up to 4 processors on the host system simultaneously. Additionally, with VSMP, processing tasks will be balanced among the various CPUs.
#4 VM High Availability (VMHA)
One of the most amazing capabilities of VMware ESX is VMHA. With 2 ESX Servers,

a SAN for shared storage, Virtual Center, and a VMHA license, if a single ESX Se
rver fails, the virtual guests on that server will move over to the other server
and restart, within seconds. This feature works regardless of the operating sys
tem used or if the applications support it.
#5 VMotion & Storage VMotion
With VMotion, VM guests are able to move from one ESX Server to another with no
downtime for the users. VMotion is what makes DRS possible. VMotion also makes m
aintenance of an ESX server possible, again, without any downtime for the users
of those virtual guests. What is required is a shared SAN storage system between
the ESX Servers and a VMotion license.
Storage VMotion (or SVMotion) is similar to VMotion in the sense that "something
" related to the VM is moved and there is no downtime to the VM guest and end us
ers. However, with SVMotion the VM Guest stays on the server that it resides on
but the virtual disk for that VM is what moves. Thus, you could move a VM guest's virtual disks from one ESX server's local datastore to a shared SAN datastore (or vice versa) with no downtime for the end users of that VM guest. There are a n
umber of restrictions on this. To read more technical details on how it works, p
lease see the VMware ESX Server 3.5 Administrators Guide.
#6 VMware Consolidated Backup (VCB)
VMware Consolidated Backup (or VCB) is a group of Windows command line utilities
, installed on a Windows system, that has SAN connectivity to the ESX Server VMF
S file system. With VCB, you can perform file level or image level backups and r
estores of the VM guests, back to the VCB server. From there, you will have to f
ind a way to get those VCB backup files off of the VCB server and integrated int
o your normal backup process. Many backup vendors integrate with VCB to make tha
t task easier.
#7 VMware Update Manager
VMware Update Manager is a relatively new feature that ties into Virtual Center
& ESX Server. With Update Manager, you can perform ESX Server updates and Window
s and Linux operating system updates of your VM guests. To perform ESX Server up
dates, you can even use VMotion and upgrade an ESX Server without ever causing a
ny downtime to the VM guests running on it. Overall, Update Manager is there to
patch your host and guest systems to prevent security vulnerabilities from being
exploited.
#8 VMware Distributed Resource Scheduler (DRS)
VMware's Distributed Resource Scheduler (or DRS) is one of the other truly amazing advanced features of ESX Server and the VI Suite. DRS is essentially a load-balancing and resource scheduling system for all of your ESX Servers. If set to fully automatic, DRS can recognize the best allocation of resources across all ESX Servers and dynamically move VM guests from one ESX Server to another, using VMotion, without any downtime to the end users. This can be used both for initial placement of VM guests and for continuous optimization (as VMware calls it). Additionally, this can be used for ESX Server maintenance.
#9 VMware's Virtual Center (VC) & Infrastructure Client (VI Client)
I prefer to list the VMware Infrastructure Client & Virtual Center as one of the advanced features of ESX Server & the VI Suite. Virtual Center is a required piece of many of the advanced ESX Server features. Also, VC has many advanced features in its own right. When tied with VC, the VI Client is really the interface that a VMware administrator uses to configure, optimize, and administer all of your ESX Server systems.
With the VI Client, you gain performance information, security & role administra
tion, and template-based rollout of new VM guests for the entire virtual infrast
ructure. If you have more than 1 ESX Server, you need VMware Virtual Center.

#10 VMware Site Recovery Manager (SRM)


Recently announced for sale and expected to be shipping in 30 days, VMware's Site
Recovery Manager is a huge disaster recovery feature. If you have two data cente
rs (primary/protected and a secondary/recovery), VMware ESX Servers at each site
, and a SRM supported SAN at each site, you can use SRM to plan, test, and recov
er your entire VMware virtual infrastructure.
VMware ESX Server vs. the VMware Infrastructure Suite
VMware ESX Server is packaged and purchased in 4 different packages.
VMware ESXi: the slimmed-down (yet fully functional) version of ESX Server that has no service console. By buying ESXi, you get VMFS and Virtual SMP only.
VMware Infrastructure Foundation: previously called the starter kit, the Foundation package includes ESX or ESXi, VMFS, Virtual SMP, VirtualCenter agent, Consolidated Backup, and Update Manager.
VMware Infrastructure Standard: includes ESX or ESXi, VMFS, Virtual SMP, VirtualCenter agent, Consolidated Backup, Update Manager, and VMware HA.
VMware Infrastructure Enterprise: includes ESX or ESXi, VMFS, Virtual SMP, VirtualCenter agent, Consolidated Backup, Update Manager, VMware HA, VMotion, Storage VMotion, and DRS.
You should note that Virtual Center is required for some of the more advanced fe
atures and it is purchased separately. Also, there are varying levels of support
available for these products. As the length and the priority of your support pa
ckage increase, so does the cost.
-----------------VirtualCenter license issues
However, all licensed functionality currently operating at the time the license server becomes unavailable continues to operate as follows:
- All VirtualCenter licensed features continue to operate indefinitely, relying on a cached version of the license state. This includes not only basic VirtualCenter operation, but licenses for VirtualCenter add-ons, such as VMotion and DRS.
- For ESX Server licensed features, there is a 14-day grace period during which hosts continue operation, relying on a cached version of the license state, even across reboots. After the grace period expires, certain ESX Server operations, such as powering on virtual machines, become unavailable.
During the ESX Server grace period, when the license server is unavailable, the following operations are unaffected:
- Virtual machines continue to run. VI Clients can configure and operate virtual machines.
- ESX Server hosts continue to run. You can connect to any ESX Server host in the VirtualCenter inventory for operation and maintenance. Connections to the VirtualCenter Server remain. VI Clients can operate and maintain virtual machines from their host even if the VirtualCenter Server connection is also lost.
During the grace period, restricted operations include:
- Adding ESX Server hosts to the VirtualCenter inventory. You cannot change VirtualCenter agent licenses for hosts.
- Adding or removing hosts from a cluster. You cannot change host membership for the current VMotion, HA, or DRS configuration.
- Adding or removing license keys.
When the grace period has expired, cached license information is no longer stored. As a result, virtual machines can no longer be powered on. Running virtual machines continue to run but cannot be rebooted.
When the license server becomes available again, hosts reconnect to the license server. No rebooting or manual action is required to restore license availability. The grace period timer is reset whenever the license server becomes available again.
------------------By default, ESX has 22 different users and 31 groups.
In VMware ESX Server, you have 4 roles by default.

------------Vmware SAN paths


I had a question from a fellow blogger about the Fixed/Most Recently Used setting for a SAN's path policy. This was related to an IBM SVC, which was only supported as an MRU setup at the moment, but as of ESX U3: IBM SAN Volume Controller (SVC) is now supported with the Fixed multipathing policy as well as the MRU multipathing policy (although the SAN guide still says it's not).
We can have a long discussion about this, but it's plain and simple:
On an Active/Passive array you need to set the path policy to "Most Recently Used".
An Active/Active array must have the path policy set to "Fixed".
Now I always wondered why there was a difference in these path policies. There probably are a couple of explanations, but the most obvious one is:
MRU fails over to an alternative path when any of the following SCSI sense codes is received: NOT_READY, ILLEGAL_REQUEST, NO_CONNECT or SP_HUNG. Keep in mind that MRU doesn't fail back.
For Active/Active SANs with the Fixed path policy, a failover only occurs when the SCSI sense code NO_CONNECT is received. When the path returns, a failback will occur.
As you can see, four against just one SCSI sense code. You can imagine what happens if you change MRU to Fixed when it's not supported by the array: SCSI sense codes will be sent out, but ESX isn't expecting them and will not do a path failover.
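To inspect or change the policy from the service console on ESX 3.x, esxcfg-mpath is the usual tool; a hedged sketch (the LUN name is a placeholder, so verify the flags against your build):
esxcfg-mpath -l
esxcfg-mpath --policy=mru --lun=vmhba1:0:0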

---------------One more: what is the maximum swap size we can allocate for an ESX host? Ans: 1600 MB, as a maximum of only 800 MB of RAM can be allocated for the COS/SC; hence twice the size of the COS/SC = swap size.
---------------------------The way to enable VMotion via the command line changed. So for anyone looking for this particular command:
/usr/bin/vmware-vim-cmd "hostsvc/vmotion/vnic_set vmk0"
----------------------HA best practices
Your ESX host names should be in lowercase and use FQDNs.
Provide Service Console redundancy.
If you add an isolation validation address with das.isolationaddress, add an additional 5000 (milliseconds) to das.failuredetectiontime.
If your Service Console network is set up with active/standby redundancy, then your das.failuredetectiontime needs to be set to 60000.
If you ensured Service Console redundancy by adding a secondary Service Console, then das.failuredetectiontime needs to be set to 20000 and you need to set up an additional das.isolationaddress.
If you set up a secondary Service Console, use a different subnet and vSwitch than your primary has.
If you don't want to use your default gateway as an isolation validation address, or can't use it because it's a non-pingable device, then disable its usage by setting das.usedefaultisolationaddress to false and add a pingable das.isolationaddress.
Change the default isolation response to "power off vm" and set restart priorities for your AD/DNS/VC/SQL servers.
-----------------------------In the vnic_set example above, vmk0 is the first VMkernel interface. This is one of the things that changed, so no port group IDs anymore. And if you need to do anything via the command line that doesn't seem to be possible with the normal commands: vmware-vim-cmd. Definitely the way to go.
------------To see the virtual machines registered on an ESX server, the command is
vmware-cmd -l
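A quick sketch of what that looks like, together with a state check for one VM (the path is made up):
# vmware-cmd -l
/vmfs/volumes/storage1/myvm/myvm.vmx
# vmware-cmd /vmfs/volumes/storage1/myvm/myvm.vmx getstate
getstate() = on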
--------------The primary difference between NAS and SAN is at the communication level. NAS communicates over the network using a network share, while SAN primarily uses the Fibre Channel protocol.
NAS devices transfer data from the storage device to the server in the form of files. NAS units use file systems, which are managed independently. These devices manage file systems and user authentication.
Recommended limit of 16 ESX servers per VMFS volume, based on limitations of a VirtualCenter-managed ESX setup.
Recommended maximum of 32 IO-intensive VMs sharing a VMFS volume.
Up to 100 non-IO-intensive VMs can share a single VMFS volume with acceptable performance.
No more than 255 files per VMFS partition.
Up to 2TB limit per physical extent of a VMFS volume.
When ESX is booted, it scans Fibre Channel and SCSI devices for new and existing LUNs. You can manually initiate a scan through the VMware Management Interface or by using the cos-rescan.sh command. VMware recommends using cos-rescan.sh because it is easier to use with certain Fibre Channel adapters than vmkfstools.
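For example, rescanning the first HBA from the service console might look like the following (the adapter name is a placeholder; on ESX 3.x, esxcfg-rescan is the equivalent):
cos-rescan.sh vmhba0
esxcfg-rescan vmhba0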

Detecting high-numbered and missing LUNs
ESX Server, by default, only scans for LUN 0 to LUN 7 for every target. If you are using LUN numbers larger than 7, you will need to change the setting for the DiskMaxLUN field from the default of 8 to the value that you need.
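On ESX 3.x this can also be changed from the service console with esxcfg-advcfg; a hedged sketch (verify the option path on your release):
esxcfg-advcfg -s 255 /Disk/MaxLUN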
VMware recommends increasing the maximum queue depth for the Fibre Channel adapters. This change is done by editing the hwconfig file in the /etc/vmware directory.
HBA Settings for Failover of QLogic Adapters
For QLogic cards, VMware suggests that you adjust the
PortDownRetryCount value in the QLogic BIOS. This value determines
how quickly a failover occurs when a link goes down.
You can also use the command vmkmultipath in the service console to view and change multipath settings.
To view the current multipathing configuration, use the -q switch with the command:
vmkmultipath -q
Using the -s switch together with the -r switch allows you to specify the preferred path to a disk. The syntax is:
vmkmultipath -s <disk> -r <NewPath>
# vmkmultipath -s vmhba1:0:1 -r vmhba2:0:1
Ensure that your policy is set to fixed by setting the path policy using the -p switch with the command:
vmkmultipath -s <disk> -p <policy>
vmkmultipath -s vmhba1:0:1 -p fixed
However, VMware suggests that you run the vmkmultipath command with the -S switch, as root, to ensure that the settings are saved:
# /usr/sbin/vmkmultipath -S
VMFS Volumes
When using commands like df, you will not see the /vmfs directory. Instead, you need to use vdf, which reports all of the normal df information plus information about the VMFS volumes.
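A quick example (assuming, as with df, that -h gives human-readable sizes):
vdf -h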

In a default ESX configuration, the following services are running and generating traffic over eth0:
ESX MUI
ssh sessions to the COS
Remote Console connections to guest operating systems
VirtualCenter communication to vmware-ccagent on the host
Monitoring agents running on the COS
Backups that occur on the COS
The location of NIC information:
For Intel adapters: /proc/net/PRO_LAN_Adapters/eth0.info
For Broadcom adapters: /proc/net/nicinfo/eth0.info
The /etc/modules.conf file allows for the
manual configuration of the speed and duplex settings for eth0.
If you notice slow speeds or disconnected sessions
to your ESX console, the following command may be run to
determine your current speed and duplex configuration:
# mii-tool
eth0: 100 Mbit, half duplex, link ok
Refer to the table below to determine which file is required to modify a specific setting:
/etc/sysconfig/network-scripts/ifcfg-eth0 -- IP Address, Subnet Mask
/etc/resolv.conf -- Search Suffix, DNS Servers
/etc/sysconfig/network -- Hostname, Default Gateway
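For instance, a typical Red Hat-style ifcfg-eth0 for the Service Console might look like this (addresses are examples only):
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.1.10
NETMASK=255.255.255.0
ONBOOT=yes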
ESX provides several tools that can be used to monitor the utilization.
Vmkusage is an excellent tool for graphing historical data in
regards to VMNIC performance.
The second tool that can be utilized for performance monitoring is
esxtop.
Once the bond is configured with the
proper VMNICs, a virtual switch needs to be defined in
/etc/vmware/netmap.conf that references this bond.
The device variable references a pre-defined
bond configuration from the /etc/vmware/hwconfig file.
network.0.name = VSwitch1
network.0.device = bond0
network.1.name = VSwitch2
network.1.device = bond1
network.2.name = VMotion
network.2.device = bond2

By combining the two configuration files we have 3 virtual switches. In the abov
e example, we have defined two virtual switches for use by virtual machines and
a third for the specific purpose of utilizing VMotion in the virtual infrastruct
ure. Manually modifying the above files will NOT automatically activate new virt
ual switches. If a virtual
switch is not created using the MUI, the vmware-serverd (or vmware-ccagent if yo
u are using VirtualCenter) process must be restarted.
Since private virtual switches do not need to map back to VMNICs, there is no ne
ed to touch the /etc/vmware/hwconfig file.
We can add two simple lines to /etc/vmware/netmap.conf to create a new private v
irtual switch:
network3.name = Private Switch 1
network3.device = vmnet_0
network4.name = Private Switch 2
network4.device = vmnet_1
--We can easily configure the gigabit connection to be the "home link" for the virtual switch. Upon failure of the home link, the backup link will automatically activate
and handle the virtual machine traffic until the issue with the high speed conne
ction can be resolved. When performing this failover, ESX utilizes the same meth
odology as MAC Address load balancing of instantly re-ARPing the virtual MAC add
resses for the virtual machines down the alternative path. To make this configur
ation, add the following line to /etc/vmware/hwconfig: nicteam.bond0.home_link =
vmnic1
-----------IP Address
The second method that ESX is capable of providing for load balancing is based o
n destination IP address. Since outgoing virtual machine traffic is balanced bas
ed on the destination IP address of the packet, this method provides a much more
balanced configuration than MAC Address based balancing. Like the previous meth
od, if a link failure
is detected by the VMkernel, there will be no impact to the connectivity of the
virtual machines. The downside of utilizing this method of load balancing is tha
t it requires additional configuration of the physical network equipment.
Because of the way the outgoing traffic traverses the network in an
IP Address load balancing configuration, the MAC addresses of the
virtual NICs will be seen by multiple switch ports. In order to get around this issue, either EtherChannel (assuming Cisco switches are utilized) or 802.3ad (LACP, Link Aggregation Control Protocol) must be configured on the physical switches. Without this configuration, the duplicate MAC addresses will cause switching issues.
In addition to requiring physical switch configuration changes, an
ESX server configuration change is required. There is a single line
that needs to be added to /etc/vmware/hwconfig for each virtual
switch that you wish to enable IP Address load balancing on. To make
this change, use your favorite text editor and add the line below to
/etc/vmware/hwconfig. You will need to utilize the same configuration
file to determine the name of the bond that you need to reference
in the following entry (replace bond0 with the proper value):
nicteam.bond0.load_balance_mode = out-ip
--------------------

The following steps assume no virtual machines have been configured


on the ESX host:
1. Modify /etc/modules.conf to comment out the line that begins with "alias eth0". This will disable eth0 on the next reboot of the ESX host.
2. Run vmkpcidivy -i at the console. Walk through the current configuration. When you get to the network adapter that is assigned to the Console (c), make sure to change it to Virtual Machines (v). This should be the only value that changes from running vmkpcidivy.
3. Modify /etc/vmware/hwconfig by reconfiguring the network
bonds. Remove any line that begins with the following: (In
this case, X can be any numeric value.)
nicteam.vmnicX.team
Once the current bond has been deleted, add the following
two lines to the end of the file:
nicteam.vmnic0.team = bond0
nicteam.vmnic1.team = bond0
4. Modify /etc/vmware/netmap.conf to remove and recreate the
required virtual switches. Since we are working under the
assumption that this is a new server configuration, remove
any lines that exist in this file and add the following 4 lines:
network0.name = Production
network0.device = bond0
network1.name = VMotion
network1.device = vmnic2
This will establish two new virtual switches once the system
reboots. The first virtual switch will consist of VMNIC0 and
VMNIC1 and will be utilized for virtual machines. The second
virtual switch consists of only VMNIC2 and is dedicated
for VMotion traffic.
5. Modify the /etc/rc.d/rc.local file to properly utilize the vmxnet_console driver to utilize a bond for eth0. Add the following 2 lines at the end of /etc/rc.d/rc.local:
insmod vmxnet_console devName="bond0"
ifup eth0
6. Reboot the server.
When the server comes back online, we will have a pretty advanced
network configuration. The COS will actually be utilizing a redundant
bond of two NICs as eth0 through the vmxnet_console driver.
Virtual machines will utilize the same bond through a virtual switch
within ESX and have the same redundancy as eth0. VMotion will
have the dedicated NIC that VMware recommends for optimal performance.
Advantages of using the vmxnet_console Driver
Redundant eth0 connection.
VMotion traffic will not impact virtual machine performance.
Disadvantages of using the vmxnet_console Driver
Complex configuration that is difficult to troubleshoot.
eth0 must reside on the same VLAN as the virtual machines.
------------------------

What's New
With this release of VMware Infrastructure 3, VMware innovations reinforce three
driving factors of virtualization adoption that continue to make VMware Infrast
ructure the virtualization platform of choice for datacenters of all sizes and a
cross all industries:
Effective Datacenter Management
Mainframe-class Reliability and Availability
Platform for any Operating System, Application, or Hardware
The download bundle available for ESX Server 3.5 from the VMware Web site is an
update from the original release download bundle. The updated download bundle fi
xes an issue that might occur when upgrading from ESX Server 3.0.1 or ESX Server
3.0.2 to ESX Server 3.5. The updated download bundle and the original download
bundle released on 12/10/2007 are identical with the exception of a modification
to the upgrade script that automates the installation of an RPM. For informatio
n about manually installing an RPM for upgrading ESX Server, refer to KB 1003801.
Effective Datacenter Management
Guided Consolidation Guided Consolidation, an enhancement to VMware VirtualCenter,
guides new virtualization users through the consolidation process in a wizard-b
ased, tutorial-like fashion. Guided Consolidation leverages capacity planning ca
pabilities to discover physical systems and analyze them. Integrated conversion
functionality transforms these physical systems into virtual machines and intell
igently places them on the most appropriate VMware ESX Server hosts.
VMware Converter Enterprise integration VirtualCenter 2.5 provides support for int
egrated Physical-to-Virtual (P2V) and Virtual-to-Virtual (V2V) migrations. This
supports scheduled and scripted conversions, Microsoft Windows Vista conversions
, and restoration of virtual disk images that are backed up using VCB, from with
in the VI Client.
VMware Distributed Power Management (experimental) VMware DPM reduces power consum
ption by intelligently balancing a datacenter's workload. VMware DPM, which is p
art of VMware Distributed Resource Scheduler, automatically powers off servers w
hose resources are not immediately required and returns power to these servers w
hen the demand for compute resources increases again.
Image Customization for 64-bit guest operating systems Image customization provide
s administrators with the ability to customize the identity and network settings
of a virtual machine's guest operating system during virtual machine deployment
from templates.
Provisioning across datacenters VirtualCenter 2.5 allows you to provision virtual
machines across datacenters. As a result, VMware Infrastructure administrators c
an now clone a virtual machine on one datacenter to another datacenter.
Batch installation of VMware Tools VirtualCenter 2.5 provides support for batch in
stallations of VMware Tools so that VMware Tools can now be updated for selected
groups of virtual machines.
Datastore browser This release of VMware Infrastructure 3 supports file sharing ac
ross ESX Server hosts (ESX Server 3.5 or ESX Server 3i) managed by the same Virt
ualCenter Server.
Open Virtual Machine Format (OVF) The Open Virtual Machine Format (OVF) is a virtu
al machine distribution format that supports sharing of virtual machines between
products and organizations. VMware Infrastructure Client version 2.5 allows you
to import and generate virtual machines in OVF format through the File > Virtua
l Appliance > Import/Export menu items.
NEW: Lab Manager 2.5.2 Support ESX Server 3 version 3.5 hosts can be used with VMw
are Lab Manager 2.5.2. ESX Server 3.0.x hosts managed by VirtualCenter 2.5 are a
lso supported in Lab Manager 2.5.2. However, hosts used in Lab Manager 2.5.2 mus
t be of the same type.
Mainframe-class Reliability and Availability
VMware Storage VMotion Storage VMotion allows IT administrators to minimize servic
e disruption due to planned storage downtime previously incurred for rebalancing

or retiring storage arrays. Storage VMotion simplifies array migration and upgr
ade tasks and reduces I/O bottlenecks by moving virtual machines to the best ava
ilable storage resource in your environment.
Migrations using Storage VMotion must be administered through the Remote Command
Line Interface (Remote CLI), which is available for download at the following l
ocation: http://www.vmware.com/download/download.do?downloadGroup=VI-RCLI.
VMware Update Manager Update Manager automates patch and update management for ESX
Server hosts and select Microsoft and Linux virtual machines.
VMware High Availability enhancements Enhanced high availability provides experime
ntal support for monitoring individual virtual machine failures. VMware HA can n
ow be set up to either restart the failed virtual machine or send a notification
to the administrator.
VMware VMotion with local swap files VMware Infrastructure 3 now allows swap files
to be stored on local storage while still facilitating VMotion migrations for t
hese virtual machines.
VMware Consolidated Backup (VCB) enhancements The VMware Consolidated Backup (VCB)
framework has the following enhancements:
iSCSI Storage The VCB framework now supports backing up of virtual machines direct
ly from iSCSI storage. Previously, the VCB proxy was only supported with Fibre C
hannel SAN storage.
Integration in Converter VMware Converter has the ability to restore virtual machi
ne image backups created through the VCB framework. Now that Converter is integr
ated into VirtualCenter 2.5, you can perform image restores directly through the
VI Client.
For details on many improvements and enhancements that this release of Consolida
ted Backup offers, see the VMware Consolidated Backup 1.1 Release Notes.
Platform for any Operating System, Application, or Hardware
Management of up to 200 hosts and 2000 virtual machines VirtualCenter 2.5 can mana
ge many more hosts and virtual machines than previous releases, scaling the mana
geability of the virtual datacenter to up to 200 hosts and 2000 virtual machines
.
Large memory support for both ESX Server hosts and virtual machines ESX Server 3.5
supports 256GB of physical memory and virtual machines with 64GB of RAM.
ESX Server host support for up to 32 logical processors ESX Server 3.5 fully supp
orts systems with up to 32 logical processors. Systems with up to 64 logical pro
cessors are supported experimentally.
SATA support ESX Server 3.5 supports selected SATA devices connected to dual SAS/S
ATA controllers.
10 Gigabit Ethernet support Neterion and NetXen 10 Gigabit Ethernet NIC cards are
supported in ESX Server 3.5.
N-Port ID Virtualization (NPIV) support ESX Server 3.5 introduces support for NPIV
for Fibre Channel SANs. Each virtual machine can now have its own World Wide Po
rt Name (WWPN).
Cisco Discovery Protocol (CDP) support This release of VMware Infrastructure 3 inc
orporates support for CDP to help IT administrators better troubleshoot and moni
tor Cisco-based environments from within VirtualCenter 2.5 and the VI Client. CD
P allows VMware Infrastructure administrators to know which Cisco switch port is
connected to each virtual switch uplink (that is, each physical NIC).
NEW: NetFlow support (experimental) NetFlow is a networking tool with multiple use
s, including network monitoring and profiling, billing, intrusion detection and
prevention, networking forensics, and Sarbanes-Oxley compliance.
NEW: Internet Protocol Version 6 (IPv6) support for virtual machines ESX Server 3
version 3.5 supports virtual machines configured for IPv6.
Paravirtualized guest operating system support with VMI 3.0 ESX Server 3.5 support
s paravirtualized guest operating systems that conform to the VMware Virtual Mac
hine Interface (VMI) 3.0. VMI is an open paravirtualization interface developed
by VMware in collaboration with the Linux community (VMI was integrated into the

mainline Linux kernel in version 2.6.22).


Large page size In ESX Server 3.5, the VMkernel can now allocate 2MB pages to the
guest operating system.
Enhanced VMXNET Enhanced VMXNET is the next version of VMware's paravirtualized vi
rtual networking device for guest operating systems. Enhanced VMXNET includes se
veral new networking I/O performance improvements including support for TCP Segm
entation Offload (TSO) and jumbo frames.
TCP Segmentation Offload (TSO) TCP Segmentation Offload (TSO) improves networking
I/O performance by reducing the CPU overhead involved with sending large amounts
of TCP traffic.
Jumbo frames Jumbo frames allow ESX Server 3.5 to send larger frames out onto the
physical network. The network must support jumbo frames (end-to-end) for jumbo f
rames to be effective.
NetQueue support VMware supports NetQueue, a performance technology that significa
ntly improves performance in 10 Gigabit Ethernet virtual environments.
Intel I/O Acceleration Technology (IOATv1) support (experimental) ESX Server 3.5 p
rovides experimental support for IOATv1.
InfiniBand As a result of the VMware Community Source co-development effort with M
ellanox Technologies, ESX Server 3.5 is compatible with InfiniBand Host Channel
Adapters (HCAs) from Mellanox Technologies. Support for this feature is provided
by Mellanox Technologies as part of the VMware Third Party Hardware and Softwar
e Support Policy.
Round-robin load balancing (experimental) ESX Server 3.5 enhances native load bala
ncing by providing experimental support for round-robin load balancing of HBAs.

---------------------------------Vmware boot process
LILO
VMkernel
init (/etc/inittab)
/etc/rc.d/rc3.d will have the symbolic links from /etc/init.d
S00vmkstart--actually links to a script called
vmkhalt. By running this script first, VMware ensures that there are
no VMkernel processes running on the system during the boot
process.
S10network --tcp/ip services
S12syslog --syslog daemon
S56xinetd --handles incoming requests to the COS. Each application that can be started by xinetd has a configuration file in /etc/xinetd.d. If the disable = no flag is set in the configuration file of a particular application, then xinetd starts the application. The most important application that is started here is vmware-authd, which provides a way to connect and authenticate to ESX to perform VMkernel modifications.
S90vmware
This is where the VMkernel finally begins to load. The first thing that
the VMkernel does when it starts is load the proper device drivers to
interact with the physical hardware of the host. You can view all the
drivers that the VMkernel may utilize by looking in the

/usr/lib/vmware/vmkmod directory.
Once the VMkernel has successfully loaded the proper hardware
drivers it starts to run its various support scripts:
The vmklogger sends messages to the syslog daemon and generates logs the entire time the VMkernel is running.
The vmkdump script saves any existing VMkernel dump files from the VMcore dump partition and prepares the partition in case of future errors.
Next the VMFS partitions (the partitions used to store all of your VM
disk files) are mounted. The VMkernel simply scans the SCSI devices
of the system and then automatically mounts any partition that is configured
as VMFS. Once the VMFS partitions are mounted the
VMkernel is completely loaded and ready to start managing virtual machines.
S91httpd.vmware -- One of the last steps of the boot process for the COS is to start the VMware MUI (the web interface for VMware management). At this point the VMkernel has been loaded and is running. Starting the MUI provides us with an interface used to graphically interact with ESX. Once the MUI is loaded, a display plugged into the host's local console will display a message stating everything is properly loaded and you can now access your ESX host from a web browser.
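To see this ordering on a live host, list the runlevel-3 start scripts (output varies by build; this is just a sketch):
ls /etc/rc.d/rc3.d/ | grep '^S'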

Modifying device allocations through the service console can be done with the vmkpcidivy command:
vmkpcidivy -i
The esxcfg-vswif command will allow the modification of this vSwitch port from the COS command line.
http://www.applicationdelivery.co.uk/blog/tag/vmdk-limits/
While this may seem limited, the tool used is actually quite powerful.
VMware provides the nfshaper module, which allows the VMkernel
to control the outgoing bandwidth on a per guest basis.

------------------------VMware ESX Server performance monitoring is installed by default, but it is not


activated. It can be activated with command "vmkusagectl install". If you want t
o cleanup the statistics, you will have to uninstall the monitoring service with
command "vmkusagectl uninstall". You can then clean the database with command "
vmkusagectl cleandb", and (re)activate/(re)install it again. The excellent monit
oring web pages, created with rrdtool, are at address "http://esx-server-name/vm
kusage/".

Command "esxtop" is like the "top" command, but it shows the VMkernel processes
instead.
"vdf" tool displays the amount of free space on different volumes, including VMF
S volumes.
The ESX utility for locating the correct network card among many is "findnic".
Some other commands are: "vmfs-ftp", "vmkmultipath", "vmklogger", "vmware-cmd",
"vm-support".
There are a couple of files and directories you should know about. The most impo
rtant ones are listed below.
/etc/modules.conf This file contains a list of devices in the system available to the Service Console. Devices allocated solely to VMs but physically existing on the system are usually also shown here in the commented-out ("#") lines. This is an important file for root and administrators.
/etc/fstab This file defines the local and remote filesystems which are mounted
at ESX Server boot.
/etc/rc.d/rc.local This file is for server local customisations required at the
server bootup. Potential additions to this file are public/shared vmfs mounts.
/etc/syslog.conf
This file configures what things are logged and where. Some examples are given below:
*.crit /dev/tty12
This example logs all log items at level "crit" (critical) or higher to the virtual terminal at tty12. You can see this log by pressing [Alt]-[F12] on the console.
*.=err /dev/tty11
This example logs all log items at exactly level "err" (error) to the virtual terminal at tty11. You can see this log by pressing [Alt]-[F11] on the console.
*.=warning /dev/tty10
This example logs all log items at exactly level "warning" to the virtual terminal at tty10. You can see this log by pressing [Alt]-[F10] on the console.
*.* @192.168.31.3
This example forwards everything (all syslog entries) to another (central) syslog server. Pay attention to that server's security.
/etc/logrotate.conf
This is the main configuration file for log file rotation program. It defines th
e defaults for log file rotation, log file compression, and time to keep the old
log files. Processing the contents of /etc/logrotate.d/ directory is also defin
ed here.
/etc/logrotate.d/
This directory contains instructions service by service for log file rotation, log file compression, and time to keep the old log files. For the three vmk* files, raise "250k" to "4096k", and enable compression.


/etc/inittab
Here you can change the number of virtual terminals available on the Service Console. The default is 6, but you can go up to 9. I always go up to 9 :-)
/etc/bashrc
The system default $PS1 is defined here. It is a good idea to change "\W" to "\w
" here to always see the full path while logged on the Service Console. This is
one of my favourites.
/etc/profile.d/colorls.sh
Command "ls" is aliased to "ls --color=tty" here. Many admins don't like this colouring. You can comment out ("#") this line. I always do this one, too.
/etc/init.d/
This directory contains the actual start-up scripts.
/etc/rc3.d/
This directory contains the K(ill) and S(tart) scripts for the default runlevel 3. The services starting with "S" are started on this runlevel, and the services starting with "K" are killed, i.e. not started.
/var/log/
This directory contains all the log files. VMware's log files start with letters
"vm". The general main log file is "messages".
/etc/ssh/
This directory contains all the SSH daemon configuration files and the public and private keys. The defaults are both secure and flexible and rarely need any changing.
/etc/vmware/
This directory contains the most important vmkernel configuration files.
/etc/vmware/vm-list
A file containing a list of registered VMs on this ESX Server.
/etc/xinetd.conf
This is the main and defaults setting configuration file for xinet daemon. Proce
ssing the contents of /etc/xinetd.d/ directory is also defined here.
/etc/xinetd.d/
This directory contains instructions service by service for if and how to start
the service. Of the services here, vmware-authd, wu-ftpd, and telnet are most in
teresting to us.
Two of the most interesting parameter lines are "bind =" and "only_from =", whic
h allows limiting service usage.
/etc/ntp.conf
This file configures the NTP daemon. Usable public NTP servers in Finland are fi
.pool.ntp.org, elsewhere in Europe europe.pool.ntp.org. You should always place
two to four NTP servers to ntp.conf file. Due to the nature of *.pool.ntp.org, y
ou should just have the same line four times in the configuration file. Check ww
w.pool.ntp.org for a public NTP server close to you. Remember to change the serv
ice to autostart at runlevel 3.

--------------------

22/tcp
SSH daemon listens to this port for remote connections. By default password auth
entication is used for logons. RSA/DSA public/private key authentication can be
used and it is actually tried first. Userid/password authentication is actually
tried second. For higher security and for automated/scripted logons RSA/DSA auth
entication must be used.
902/tcp
VMware authd, the web management UI (MUI) and remote console authentication daemon (service) for VMware ESX Server uses this port. The daemon does not listen on this port directly, but xinetd does. When someone opens a connection to port 902, xinetd then launches authd, and the actual authentication starts. Xinetd-related authd security is defined in the file /etc/xinetd.d/vmware-authd.
80/tcp and 443/tcp
The httpd.vmware application web server listens to these ports. With high securi
ty on, all connections to port 80 are automatically redirected to port 443.
8222/tcp and 8333/tcp
These ports are used by ESX Server's web UI. They are just forwards to ports 80
and 443 respectively. These ports do not need to be open on the firewalls.
Remember that sshd is by default always running on the Service Console, so you can always connect to it and do low-level management directly on the Service Console files. An example of this kind of management is when the MUI stops responding. Just log in using your account via ssh, and enter the following command to restart the webserver responsible for the MUI: su -c "/etc/init.d/httpd.vmware restart". You normally need root's password to complete this task. You could (should!) also use sudo/visudo to make things even easier.
----------------------------The DRS invocation interval defaults to 5 minutes; we can change the value in the vpxd.cfg file.
---------------CPU and memory share values, respectively, default to:
High: 2000 shares per virtual CPU and 20 shares per megabyte of virtual machine memory
Normal: 1000 shares per virtual CPU and 10 shares per megabyte of virtual machine memory
Low: 500 shares per virtual CPU and 5 shares per megabyte of virtual machine memory
--------------
Remove a vmnic from a virtual switch via the COS instead of the MUI:
You could try manually updating these 3 files:
/etc/vmware/devnames.conf
/etc/vmware/hwconfig
/etc/vmware/netmap.conf
PLEASE TRY THIS ON A "TEST" SYSTEM.
PLEASE BACK THE FILES UP BEFORE TRYING ANYTHING.
---------------

Service Console issue


OK, do the following.
Delete your vswif and vmknic interfaces using the following commands:
esxcfg-vswif -d vswif0
esxcfg-vmknic -d vmk2
Then delete your port groups:
esxcfg-vswitch -D "VMKernel"
esxcfg-vswitch -D "Service Console"
Then delete your vswitches:
esxcfg-vswitch -d vSwitch0
Now you should have a 'blank' networking config.
Now run the restore (-r) options:
esxcfg-vswitch -r
esxcfg-vmknic -r
esxcfg-vswif -r
Run the list (-l) commands to see what you have.
Now to create everything again.
Create the vswitches:
esxcfg-vswitch -a vSwitch0
esxcfg-vswitch -a vSwitch1
Create your port groups:
esxcfg-vswitch -A "Service Console" vSwitch0
esxcfg-vswitch -A "VMKernel" vSwitch0
esxcfg-vswitch -A "VMware Guests" vSwitch1
Now create the uplinks:
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -L vmnic1 vSwitch1
If this all works with no issues, then run esxcfg-vswitch -l to see what it looks like.
Now recreate the vswif interface

esxcfg-vswif -a vswif0 -p "Service Console" -i 192.168.1.4 -n 255.255.255.0


Now recreate the vmkernel interface
esxcfg-vmknic -a "VMKernel" -i 192.168.1.5 -n 255.255.255.0
Run esxcfg-vswitch -l to check what your vswitch config looks like. Hopefully everything looks good. Then check whether you can ping your SC IP from another PC.
Hope this helps!
-------------------
lsof -i (lists open network sockets and the processes/ports behind them)
--------
Service Console Configuration and Troubleshooting Commands
esxcfg-advcfg VMware ESX Server Advanced Configuration Option Tool
Provides an interface to view and change advanced options of the VMkernel.
esxcfg-boot VMware ESX Server Boot Configuration Tool
Provides an interface to view and change boot options, including updating initrd
and GRUB options.
esxcfg-configcheck VMware ESX Server Config Check Tool
Checks the configuration file for format updates.
esxcfg-info VMware ESX Server Info Tool
Used primarily for debugging, this command provides a view into the state of the
VMkernel and Service Console components.
esxcfg-module VMware ESX Server Advanced Configuration Option Tool
This command provides an interface to see which driver modules are loaded when the system boots, as well as the ability to disable or add additional modules.
esxcfg-pciid VMware ESX Server PCI ID Tool
This utility rescans the PCI ID list (/etc/vmware/pci.xml) and loads PCI identifiers for hardware so the Service Console can recognize devices.
esxcfg-resgrp VMware ESX Server Resource Group Manipulation Utility
Using this command, it is possible to create, delete, view, and modify resource
group parameters and configurations.
esxupdate VMware ESX Server Software Maintenance Tool
This command is used to query the patch status, as well as apply patches to an ESX host. Only the root user can invoke this command.
vdf VMware ESX Disk Free Command
As df works in Linux, vdf works in the Service Console. The df command will work
in the Service Console, but will not show free disk space on VMFS volumes.
vmkchdev VMware ESX Change Device Allocation Tool
This tool can assign devices to either the Service Console or VMkernel, as well as list whether a device is assigned to the SC or VMkernel. This replaced the vmkpcidivy command found in previous versions of VMware ESX.
vmkdump VMkernel Dumper
This command manages the VMkernel dump partition. It is primarily used to copy the contents of the VMkernel dump partition to a usable file for troubleshooting.
vmkerrcode VMkernel Status Return Code Display Utility
This command will decipher VMkernel error codes along with their descriptions.
vmkfstools VMware ESX Server File System Management Tool
This utility is used to create and manipulate VMFS file systems, physical storage devices on an ESX host, logical volumes, and virtual disk files.
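A few common invocations, as a sketch (the volume label, disk path, and sizes are hypothetical):

vmkfstools -C vmfs3 -S MyVMFS vmhba1:0:0:1       # create a VMFS-3 file system and label it
vmkfstools -c 10G /vmfs/volumes/MyVMFS/vm1.vmdk  # create a 10GB virtual disk on that volume
vmkfstools -X 15G /vmfs/volumes/MyVMFS/vm1.vmdk  # grow that disk to 15GB (extend the guest file system separately)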

vmkiscsi-device VMware ESX iSCSI Device Tool
Used to query information about iSCSI devices.
vmkload_mod VMkernel Module Loader
This application is used to load, unload, or list device drivers and network shaper modules in the VMkernel.
vmkloader VMkernel Loader
This command loads or unloads the VMkernel.
vmkpcidivy VMware ESX Server Device Allocation Utility
This utility, in previous versions of VMware ESX, allowed for the allocation of devices to either the Service Console or the VMkernel. In VMware ESX 3.0, this utility is deprecated and should only be used to query the host bus adapter allocations, using the following: vmkpcidivy -q vmhba_devs
vmkuptime.pl Availability Report Generator
This Perl script creates HTML that displays uptime and downtime statistics for a VMware ESX host.
vmware-hostd VMware ESX Server Host Agent
The vmware-hostd script acts as an agent for an ESX host and its virtual machines.
vmware-hostd-support VMware ESX Server Host Agent Crash Information Collector
This script collects information to help determine the state of the ESX host after a hostd crash.
Networking and Storage Commands
esxcfg-firewall VMware ESX Server Firewall Configuration Tool
Provides an interface to view and change the settings of the Service Console firewall.
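Typical usage, as a sketch (the custom port line is illustrative):

esxcfg-firewall -q                  # query current firewall settings
esxcfg-firewall -e sshClient        # enable a predefined service
esxcfg-firewall -o 123,udp,out,ntp  # open a custom port: port,protocol,direction,name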
esxcfg-hwiscsi VMware ESX Server Hardware iSCSI Configuration Tool
Provides an interface to allow or deny ARP redirection on a hardware iSCSI adapter, as well as enable or disable jumbo frames support.
esxcfg-linuxnet No specific name
This command is only used when troubleshooting VMware ESX. It allows the settings of vswif0 (the virtual NIC for the Service Console under normal operation) to be passed to the eth0 interface when booting without loading the VMkernel. Without the VMkernel loaded, the vswif0 interface is not available.
esxcfg-mpath VMware ESX Server multipathing information
This command allows for the configuration of multipath settings for Fibre Channel and iSCSI LUNs.
esxcfg-nas VMware ESX Server NAS configuration tool
This command is an interface to manipulate the NAS file systems that VMware ESX sees.
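For example (host, export, and label are hypothetical placeholders):

esxcfg-nas -a -o nfshost.example.com -s /exports/iso ISO_Datastore  # add an NFS datastore
esxcfg-nas -l                                                       # list configured NAS datastores
esxcfg-nas -d ISO_Datastore                                         # delete it again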
esxcfg-nics VMware ESX Server Physical NIC Information
This command shows information about the physical NICs that the VMkernel is using.
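For example (NIC name, speed, and duplex values are illustrative):

esxcfg-nics -l                      # list physical NICs with link state, speed, and duplex
esxcfg-nics -s 1000 -d full vmnic0  # force a NIC to 1000/full
esxcfg-nics -a vmnic0               # return the NIC to auto-negotiation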
esxcfg-rescan VMware ESX Server HBA Scanning Utility
This command initiates a scan of a specific host bus adapter device.
esxcfg-route VMware ESX Server VMkernel IP Stack Default Route Management Tool
This can set the default route for a VMkernel virtual network adapter (vmknic).
esxcfg-swiscsi VMware ESX Server Software iSCSI Configuration Tool
The command-line interface for configuring software-based iSCSI connections.
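The usual enable-and-scan sequence, as a sketch:

esxcfg-swiscsi -q   # query whether the software iSCSI initiator is enabled
esxcfg-swiscsi -e   # enable it
esxcfg-swiscsi -s   # scan the software iSCSI adapter for new LUNs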
esxcfg-vmhbadevs VMware ESX Server SCSI HBA Tool
Utility to view LUN information for SCSI host bus adapters configured in VMware
ESX.
esxcfg-vmknic VMware ESX Server VMkernel NIC Configuration Tool
Configuration utility for managing the VMkernel virtual network adapter (vmknic).
esxcfg-vswif VMware ESX Server Service Console NIC Configuration Tool
Configuration utility for managing the Service Console virtual network adapter (vswif).
esxcfg-vswitch VMware ESX Server Virtual Switch Configuration Tool
Configuration utility for managing virtual switches and settings.

esxnet-support VMware ESX Server Network Support Script
This script is used to perform a diagnostic analysis of the Service Console's and VMkernel's network connections and settings.
vmkping VMkernel Ping
Used to ping the VMkernel virtual adapter (vmknic).
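For example, testing VMkernel connectivity to another host's vmknic (the address is a placeholder, matching the VMkernel interface example earlier in these notes):

vmkping 192.168.1.5   # the ping leaves via the VMkernel stack, not the Service Console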
vmkiscsi-ls VMware ESX iSCSI Target List Tool
This command shows all iSCSI targets that the iSCSI subsystem knows about, including target name, target ID, session status, host number, bus number, and more.
vmkiscsi-tool VMware ESX iSCSI Tool
This command will show the properties of iSCSI initiators.
vmkiscsi-util VMware ESX iSCSI Utility
This command will display LUN Mapping, Target Mapping, and Target Details.
VMware Consolidated Backup Commands
vcbMounter VMware Consolidated Backup Virtual Machine Mount Utility
This utility is used to mount a virtual machine's virtual disk file for the purpose of backing up its contents.
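A sketch of a full-VM export (server name, credentials, VM name, and directory are all hypothetical placeholders):

vcbMounter -h vcserver -u backupuser -p secret -a name:MyVM -r /backups/MyVM -t fullvm

Here -h/-u/-p point at the VirtualCenter (or ESX) server, -a selects the VM (name:, ipaddr:, or uuid:), -r is the directory to export into, and -t fullvm exports the entire VM rather than mounting individual files.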
vcbResAll VMware Consolidated Backup Virtual Machine Restore Utility
This utility is used to restore multiple virtual machines' virtual disk files.
vcbRestore VMware Consolidated Backup Virtual Machine Restore Utility
This utility is used to restore a single virtual machine's virtual disk files.
vcbSnapAll VMware Consolidated Backup Virtual Machine Mount Utility
This utility is used to back up one or more virtual machines' virtual disk files.
vcbSnapshot VMware Consolidated Backup Snapshot Utility
This utility is used to back up a virtual machine's virtual disk files.
vcbUtil VMware Consolidated Backup Resource Browser and Server Ping
This utility provides different information, depending on the argument. The ping argument attempts to log into the VirtualCenter Server, the resource pools argument lists all resource pools, and the vmfolders argument lists folders that contain virtual machines.
vcbVmName VMware Consolidated Backup VM Locator Utility
This utility performs a search of virtual machines for VCB scripting. It can list individual VMs, all VMs that meet a certain criterion, or the VMs on a specific ESX host.
vmres.pl Virtual Machine Restore Utility
This Perl script is deprecated; vcbRestore should be used instead.
vmsnap.pl Virtual Machine Mount Utility
This Perl script is deprecated; vcbMounter should be used instead.
vmsnap_all Virtual Machine Mount Utility
This script is deprecated; vcbSnapAll should be used instead.
-----------------------------
VMkernel-related logging
/var/log/vmkernel Keeps information about the host and guests
/var/log/vmkwarning Collects VMkernel warning messages
/var/log/vmksummary Collects statistics for uptime information
Host Agent logging
/var/log/vmware/hostd.log Information on the agent and configuration of
an ESX host
Service Console logging
/var/log/messages Contain general log messages for troubleshooting. This
also keeps track of any users that have logged into the Service Console, and
their actions.
Web Access logging
/var/log/vmware/webAccess Web access logging for VMware ESX
Authentication logging
/var/log/secure Records all authentication requests
VirtualCenter agent logging
/var/log/vmware/vpx Logs for the VirtualCenter Agent

Virtual Machine logging Look for a file named vmware.log in the directory
of the configuration files of a virtual machine.
----------------------
ESX Memory Management Part 1
Apr 27th, 2009
by Arnim van Lieshout
I receive a lot of questions lately about ESX memory management. Things that are very obvious to me seem not to be obvious at all to some other people, so I'll try to explain them from my point of view.
First let's have a look at the virtual machine settings available to us. On the VM settings page we have several options we can configure for memory assignment.
1. Allocated memory: This is the amount of memory we assign to the VM, and it is also the amount of memory the guest OS will see as its physical memory. This is a hard limit, and the VM cannot exceed it if it demands more memory. It is configured on the Hardware tab of the VM's settings.
2. Reservations: A reservation is a guaranteed amount of memory assigned to the VM. This is a way of ensuring that the VM gets a minimal amount of memory assigned. When this reservation cannot be met, you will be unable to start the VM. This is known as "Admission Control". Reservations are set on the Resources tab of the VM's settings, and by default there is no reservation set.
3. Limits: A limit is a restriction on the VM, so it cannot use more memory than this limit. If you set this limit lower than the allocated memory value, the ballooning driver will start to inflate as soon as the VM demands more memory than the limit. Limits are set on the Resources tab of the VM's settings, and by default the limit is set to "unlimited".
Now that we know of limits and reservations, we need to have a quick look at the VMkernel swap file. This swap file is used by the VMkernel to swap out the VM's memory as a last resort to free up memory when the host is running out of it. When we set a reservation, that memory is guaranteed and cannot be swapped out to disk. So whenever a VM starts up, the VMkernel creates a swap file whose size is the limit minus the reservation. For example, for a VM with a 1024MB limit and a 512MB reservation, the swap file created will be 1024MB - 512MB = 512MB. If we set the reservation to 1024MB, there won't be a swap file created at all. Remember that by default there are no reservations and no limits set, so the swap file created for each VM will be the same size as the allocated memory.
4. Shares: With shares you set a relative importance on a VM. Unlike limits and reservations, which are fixed, shares can change dynamically. Remember that the share system only comes into play when memory resources are scarce and contention is occurring. Shares are set on the Resources tab of the VM's settings and can be set to "low", "normal", "high", or a custom value.
low = 5 shares per 1MB allocated to the VM
normal = 10 shares per 1MB allocated to the VM
high = 20 shares per 1MB allocated to the VM
It is important to note that the more memory you assign to a VM, the more shares it receives. Let's look at an example to show how this share system works. Say you have 5 VMs, each with 2,000MB of memory allocated and the share value set to "normal". The ESX host has only 4,000MB of physical machine memory available for virtual machines. Each VM receives 20,000 shares according to the normal setting (10 * 2,000). The sum of all shares is 5 * 20,000 = 100,000. Every VM will receive an equal share of 20,000/100,000 = 1/5th of the resources available = 4,000/5 = 800MB. Now we change the shares setting on 1 VM to "high", which results in this VM receiving 40,000 shares instead of 20,000. The sum of all shares is now increased to 120,000. This VM will receive 40,000/120,000 = 1/3rd of the resources available, thus 4,000/3 = 1,333MB. All the other VMs will each receive only 20,000/120,000 = 1/6th of the available resources = 4,000/6 = 666MB.

Instead of configuring these settings on a per-VM basis, it is also possible to configure them on a resource pool. A VMware ESX resource pool is a pool of CPU and memory resources; I always look at a resource pool as a group of VMs. This concludes the memory settings we can configure on a VM. Next time I will go into ESX memory management techniques.
----------
http://vm-where.com/links.aspx
