Category Archives: VMWARE

How to remove a vCenter Server that no longer exists from a Platform Services Controller.

I recently ran into a situation where a customer had removed a vCenter Server from their environment and built another, but had not decommissioned the first from the PSC. When they tried to view screens that show data across the environment, the interface would time out looking for the missing vCenter.
There is a VMware KB that describes the process for decommissioning a vCenter Server that needs to be removed from the PSC, but that process only succeeds if the vCenter Server is still available.

Given that the FQDN of the vCenter Server was VC1.corp.local:

  • SSH to the PSC.
  • cd to /usr/lib/vmware-vmdir/bin/
  • Run the command ./vdcleavefed -h VC1.corp.local -u administrator
  • Enter the administrator@vsphere.local password when prompted.
  • If the command is successful, you are prompted with:
    • ‘vdcleavefd offline for server VC1.corp.local Leave federation cleanup done’
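Put together, a complete session looks something like this. This is a minimal sketch; the PSC hostname psc.corp.local is an assumption, and the success message is the one quoted above.

# SSH to the PSC (hostname here is a placeholder)
ssh root@psc.corp.local

# Change to the directory containing the vmdir tools
cd /usr/lib/vmware-vmdir/bin/

# Remove the stale vCenter Server from the federation;
# you will be prompted for the administrator@vsphere.local password
./vdcleavefed -h VC1.corp.local -u administrator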

Some common errors:
“Leave federation cleanup failed. Error[13] – Confidentiality required.”
Check the FQDN, or try using the IP address in its place.

“Error (9234) – User invalid credential”
Check the username and password. When I used the domain-qualified user administrator@vsphere.local, I received this error. Using only administrator, with the password for administrator@vsphere.local, worked for me.

If you run into any unexpected errors, the log can be found at



Filed under VMWARE

VCAP5-DCA Objective 2.1 – Implement and Manage Complex Virtual Networks – Configuring SNMP

SNMP configuration has been made more straightforward in vCLI 5.1. We can now make all of the configuration changes through ESXCLI.

The namespace for the SNMP agent settings is esxcli system snmp.

It offers the set, get, and test commands, all of which are straightforward.
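As a minimal sketch, assuming a community string of public and a trap target of mgmt.corp.local (both placeholders), configuring and verifying the agent looks like this:

# Set a read community and a trap target (values are placeholders)
esxcli system snmp set --communities public
esxcli system snmp set --targets mgmt.corp.local@162/public

# Enable the agent
esxcli system snmp set --enable true

# Review the configuration and send a test trap
esxcli system snmp get
esxcli system snmp test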


Filed under VCAP5-DCA, VMWARE

VCAP5-DCA – Implement Complex Storage Solutions – Skills and Abilities – Determine use cases for and configure VMware DirectPath I/O

DirectPath I/O has been available since vSphere 4.0. It uses Intel VT-d and AMD-Vi to allow guests to access hardware devices directly. When using DirectPath, a VM can’t use vMotion, snapshots, or any feature that leverages them, and the directly allocated device isn’t available to other VMs. The only change with vSphere 5 is that you can vMotion a VM, provided it is in a supported hardware configuration that allows the direct path to the host hardware to persist.

The benefit of DirectPath is very specific. You can offload the CPU cycles the VMM would otherwise spend scheduling those resources, and you can expose hardware features that are not yet available through virtualized drivers or that ESXi cannot recognize. There is a performance benefit in some cases.

The tradeoff is that it nearly nullifies the agility inherent in virtualization, effectively tying a guest to a host.

There is a KB describing the configuration process.
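In broad strokes the process is: mark the device for passthrough on the host (Configuration > Hardware > Advanced Settings in the vSphere Client), reboot the host, and then add the PCI device to the VM’s hardware. To identify candidate devices from the ESXi shell, something like the following works; this is a sketch, not the full KB procedure:

# List the host’s PCI devices to find candidates for passthrough
esxcli hardware pci list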


Filed under VCAP5-DCA

My notes for the VCAP5-DCA exam

I am starting to put together my notes for the VCAP5-DCA exam.

Some of this will be restated information from other sources, and I’ll try to cite references where possible, but because I’m really just collecting my own thoughts here, your mileage may vary.

My Plan:

The plan is to review all of the information in the Blueprint.

I will post a description of each topic and notes about how to accomplish it. My goal is to identify the places where I rely heavily on documentation and would be hindered in the exam.

After I finish my notes for each category, I will take some highlights and recreate them in a video. This is new territory for me, so I may not be quick to get these posted.

Let’s see how long this takes me to really get done.


Filed under VCAP5-DCA

Less expensive VMware FT and HA for small business

Getting HA (High Availability) and FT (Fault Tolerance) in your VMware environment without an enterprise SAN…

The VMware ESX environment opens up a lot of options for a company looking to replace servers. By adding ESX(i) to the scenario you can potentially save a lot of money and gain options you wouldn’t otherwise have had. The idea of easily recovering a failed domain controller to new hardware, or reverting to a snapshot of a server taken before that last Windows update was installed, is pretty compelling.

There are some complications in a small to medium sized data center.

What happens if your ESX or ESXi host’s hardware physically fails? If you hadn’t planned for this scenario, you are looking at restoring ESX to a new machine and moving the guest servers’ files to that new machine’s storage.

In a vSphere environment with shared networking and shared storage, a second ESX(i) host can easily start all of the guest servers. You can find each machine’s vmx file (from the failed host) and register it on the second host.
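As a rough sketch of that registration step on an ESXi host, from the shell of the surviving host (the datastore and VM names here are placeholders):

# Register a guest from the failed host’s storage on the surviving host
vim-cmd solo/registervm /vmfs/volumes/datastore1/vm1/vm1.vmx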

That sounds easy enough in principle, but for a small business with no SAN in place it gets a little tricky. There are plenty of NAS solutions out there that you can use as an NFS or iSCSI target, but redundancy gets expensive quickly. If your shared storage fails, it won’t matter that you have 2 ESX(i) hosts; neither will be able to access the vmx and vmdk files of the guest servers that need to be started.

For this specific scenario we are implementing an HP/LeftHand P4000 Virtual SAN Appliance. In our real-life scenario we are replacing an aging infrastructure for a customer with 100 users in 5 locations who currently have no virtual infrastructure.

What this will look like: we end up with 2 identical DL380 G6 servers as ESXi hosts, each with 2 TB of local storage. We need one physical machine to run vCenter Server and act as a third vote to facilitate failover between the hosts. Again we’re going to use a DL380, but we don’t need to give it as many CPUs or as much memory. A Cisco 3750 connects everything together and uplinks to the client’s existing physical network. Total of 6U in one rack.

What we are replacing:

  • 1 FC SAN with 4 LUNs used by 6 physical machines in an MSCS cluster.
  • 2 SQL 2000 servers (MSCS)
  • 2 OpenEdge Progress servers (MSCS)
  • 2 Terminal Servers/File servers (MSCS and NLB)
  • 2 IIS servers in an NLB cluster used for the intranet.
  • 2 domain controllers.
  • 1 backup server.
  • 1 Proxy server.
  • 2 DMZ web servers.
  • 1 Accounting server (runs Terminal Services and QuickBooks)
  • 1 Exchange 2003 server
  • 1 Phone server (ShoreTel ShoreWare Director)
  • 4 of 6 2900 series switches.

We don’t yet have an existing example to gauge overall future performance in this scenario, so we may find that some servers do not get migrated to these hosts. For now, it looks like we can replace everything and still see a substantial performance gain in places. For ease of writing I am going to ignore that possibility and talk about it more if ‘real life’ eventually encroaches upon the thread.

Background information: this is one company with a central office and effectively 4 satellite offices that connect to the central office over point-to-point T1 lines. One of the offices acts as its own company, has some resources dedicated to it on site, and is the only user of the QuickBooks server.

The SAN holds 8 production databases. The largest database is 80 GB but has only 35 users, who access it through a web-based application that runs on the IIS servers. Users are all on workstations running Windows 2000 or XP, and currently all servers run Windows 2003. The SAN is fibre channel and uses HSG80 controllers to present the storage to the Windows machines. Within this site most applications, storage, server roles, and network paths are redundant. Including switches and phone equipment, there are 8 full racks containing all of the servers, network hardware, and phone hardware.

We can potentially replace 6 of those racks with a solution that removes the complexity of MSCS and NLB, decreases latency between the application servers and the database servers, and makes more resources available to all applications. The potential downside relates to general storage best practices for Exchange, SQL, and Progress. In this scenario we are banking on the idea that the performance gain from new hardware will be enough to negate the hit we take by spanning the volumes that contain log files, databases, and operating systems across the same 16 disks in a RAID 5 array.


Filed under KBs related to this project, Lefthand VSA on Vsphere on P4000, VMWARE

How to create an EtherChannel (port-channel) between Cisco and VMware ESX/ESXi

The load balancing needs to be set up on both the vSwitch and the switch stack.

In vCenter:
Highlight the ESX Server host.
Click the Configuration tab.
Click the Networking link.
Click Properties next to the target vSwitch.
Highlight the virtual switch in the Ports tab and click Edit.
Click the NIC Teaming tab.
From the Load Balancing dropdown, choose Route based on ip hash.
Verify that there are two or more network adapters listed under Active Adapters.
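If you prefer the shell over vCenter, the same teaming policy can be set with esxcli on ESXi 5.x. A minimal sketch, assuming the target switch is named vSwitch0:

# Set the load-balancing policy to IP hash (vSwitch0 is an assumed name)
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --load-balancing=iphash

# Confirm the change
esxcli network vswitch standard policy failover get --vswitch-name=vSwitch0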

In an enable prompt on the stack:

Remove any extraneous commands from your target interfaces.
Any differing settings between the interfaces could prevent an interface from being added to the etherchannel.
In this example I am using a switch stack consisting of 2 3750s. I have 2 NICs on the host connecting to an etherchannel that spans the 2 physical switches in the stack. This way, one switch can physically fail (or lose power) and the connection stays up. I am using interface 21 on both switches in the stack and channel-group 28.

Switch# configure terminal
Switch(config)# interface range gi1/0/21, gi2/0/21
Switch(config-if-range)# channel-group 28 mode on
Because the port-channel does not already exist, this command creates it.
Check the number of ports in the etherchannel:
Switch# show etherchannel summary
Check the load-balance type:
Switch# show etherchannel load-balance
If the etherchannel load-balance type needs to be changed, set it to source and destination IP:
Switch# configure terminal
Switch(config)# port-channel load-balance src-dst-ip
Test the load balance for an address behind the interface:
Switch# test etherchannel load-balance interface port-channel 28 ip <source-ip> <destination-ip>
Save the changes:
Switch# write mem


Filed under KBs related to this project, Lefthand VSA on Vsphere on P4000

VLAN trunk and tagging between Cisco and ESX(i)

IEEE 802.1Q VLAN tagging between Cisco trunk ports and vSwitches

Be sure to use dot1q and not ISL encapsulation on trunk interfaces that will be connected to the physical NICs on your host.

By default the switch will allow traffic from all VLANs through the trunk port.
Configure the interface of the switch with these 2 lines:
1.) Switch(config-if)# switchport trunk encapsulation dot1q

2.) Switch(config-if)# switchport mode trunk
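On the ESX(i) side, tagging is handled by assigning a VLAN ID to the port group. A minimal sketch for ESXi 5.x, assuming a port group named VM Network on VLAN 100 (both are placeholders):

# Tag the port group with VLAN 100
esxcli network vswitch standard portgroup set --portgroup-name="VM Network" --vlan-id=100

If you don’t want every VLAN crossing the trunk, you can also prune it on the switch side with switchport trunk allowed vlan.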



Filed under KBs related to this project, VMWARE