Monthly Archives: July 2010

How could I clean up stale user accounts without Quest?

I use PowerShell, in some form, on a daily basis. Primarily I use the PowerShell CLI for quick queries against WMI and for pulling data from SQL, Exchange and Active Directory. Active Directory is the most interesting, because I nearly always end up using commands that are only available through the Quest Active Directory cmdlets.

I was on a customer’s domain controller and noticed that they had a lot of service accounts created under the Users CN in AD. By service accounts, I mean user objects with names like FTPsrv, where the FTP service on an FTP server runs under that user account. Judging by the names, it was obvious that at least a few were no longer in use.

I’ll keep using the FTPsrv user as the example here. From the PS CLI I just typed:

(Get-QADUser ftpsrv).LastLogon

and checked the last logon date to see if this account was still being used.

Then, to see the same for all of these service accounts:

foreach ($usr in (Get-QADUser | Where-Object { $_.Name -like "*srv*" })) {
    "{0}`t{1}" -f $usr.Name, $usr.LastLogon
}
I disabled the unused accounts and made a note in the description to delete by a certain date if there were no problems. I can search on that note later and delete the accounts.
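The disable-and-stamp step can be scripted with the same Quest cmdlets. A rough sketch, assuming the Quest ActiveRoles snap-in is loaded; the 90-day threshold and the exact description wording are my own placeholders, not what I used on site:

```powershell
# Assumes: Add-PSSnapin Quest.ActiveRoles.ADManagement
$deleteBy = (Get-Date).AddDays(30).ToShortDateString()

Get-QADUser |
    Where-Object { $_.Name -like "*srv*" -and $_.LastLogon -lt (Get-Date).AddDays(-90) } |
    ForEach-Object {
        Disable-QADUser -Identity $_
        # Stamp the description so the account can be found and deleted later
        Set-QADUser -Identity $_ -Description "Disabled $(Get-Date -Format d) - delete after $deleteBy if no problems"
    }

# Later, find the stamped accounts for deletion:
Get-QADUser -Disabled | Where-Object { $_.Description -like "*delete after*" }
```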

This all seems simple but it occurs to me that it’s worth pointing out WHY I use the Quest AD cmdlet.

Without the Quest cmdlets, you can build a query from the Active Directory Users and Computers MMC that returns users, but you are limited to the preset options for the number of days, and the results won’t display the last logon date. That seems like a place to start, but it isn’t a useful place to actually get data. So we’re back to PowerShell to get our data.

There is a really good reason to use the Quest AD cmdlets: by default it’s pretty difficult to use PS to get useful AD information. The first problem is that you actually need to find the domain controllers and search them individually; the next is that you need to convert the timestamp to a readable format. Finally, you end up with a last logon date and time from each domain controller that will need to be compared, because the lastLogon attribute is not replicated between DCs.

$Username = "FTPsrv"

# Query every DC, because lastLogon is not replicated
[DirectoryServices.ActiveDirectory.Domain]::GetCurrentDomain().DomainControllers | ForEach-Object {
    $Server = $_.Name
    $SearchRoot = [ADSI]"LDAP://$Server"
    $Searcher = New-Object DirectoryServices.DirectorySearcher($SearchRoot, "(sAMAccountName=$Username)")
    $Searcher.FindOne() | Select-Object `
        @{n='Name';              e={ $_.Properties["name"][0] }},
        @{n='Last Logon';        e={ (Get-Date "01/01/1601").AddTicks($_.Properties["lastlogon"][0]) }},
        @{n='Domain Controller'; e={ $Server }}
}

This gets us a table with the entries from each domain controller.
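To collapse that table to a single answer, capture the per-DC output into a variable (say $Results) and keep the most recent value. A minimal sketch, assuming the query above was assigned rather than written straight to the console:

```powershell
# $Results = <the per-DC pipeline above>
# The true last logon is simply the newest of the per-DC values:
$Results |
    Sort-Object 'Last Logon' -Descending |
    Select-Object -First 1
```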



Filed under Active Directory, Powershell

Less expensive VMware FT and HA for small business

Getting HA (High Availability) and FT (Fault Tolerance) in your VMware environment without an Enterprise SAN…

The VMware ESX environment opens up a lot of options for a company looking to replace servers. By adding ESX(i) to the scenario you can potentially save a lot of money and gain options you wouldn’t otherwise have had. The idea of easily recovering a failed domain controller to new hardware, or reverting a server to a snapshot taken before that last Windows update was installed, is pretty compelling.

There are some complications in a small to medium sized data center.

What happens if your ESX or ESXi host’s hardware physically fails? If you hadn’t planned for this type of scenario, you are looking at restoring ESX to a new machine and moving the guest servers’ files to that new machine’s storage.

In a vSphere environment with shared network and shared storage, a second ESX(i) host can easily start all of the guest servers. You can easily find each machine’s vmx file (from the failed host) and add it to the second host.
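Re-registering the guests on the surviving host can even be scripted with VMware PowerCLI. A sketch under my own assumptions: the vCenter name, host name, and datastore path below are hypothetical placeholders, not from this customer’s environment:

```powershell
# Assumes VMware PowerCLI is installed and loaded
Connect-VIServer -Server vcenter.example.local

# Register a guest's .vmx from shared storage onto the surviving host
# (datastore name and path are hypothetical)
$vm = New-VM -VMFilePath "[SharedLUN01] FTPserver/FTPserver.vmx" -VMHost (Get-VMHost esx02.example.local)
Start-VM -VM $vm
```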

That sounds easy enough in principle, but for a small business with no SAN in place it gets a little tricky. There are plenty of NAS solutions out there that you can use as an NFS or iSCSI target, but redundancy gets expensive quickly. If your shared storage fails, it won’t matter that you have 2 ESX(i) hosts; neither will be able to access the vmx and vmdk files of the guest servers that need to be started.

For this specific scenario we are implementing an HP/LeftHand P4000 Virtual SAN Appliance (VSA). In our real-life scenario we are replacing an aging infrastructure for a customer with 100 users in 5 locations who currently has no virtual infrastructure.

What this will look like: we end up with 2 identical DL380 G6 servers as ESXi hosts, each with 2 TB of local storage. We need one physical machine to run vCenter Server and act as a third vote to facilitate failover between the hosts; again we’re going to use a DL380, but it doesn’t need as many CPUs or as much memory. A Cisco 3750 connects everything together, and that connects to the client’s existing physical network. Total of 6U in one rack.

What we are replacing:

  • 1 FC SAN with 4 LUNs being used by 6 physical machines in an MSCS cluster.
  • 2 SQL 2000 servers (MSCS)
  • 2 Openedge Progress servers (MSCS)
  • 2 Terminal Servers/File servers (MSCS and NLB)
  • 2 IIS servers in an NLB cluster used for the intranet.
  • 2 domain controllers.
  • 1 backup server.
  • 1 Proxy server.
  • 2 DMZ web servers.
  • 1 Accounting Server (Runs Terminal services and Quickbooks)
  • 1 Exchange 2003 server
  • 1 Phone server (Shoretel Shoreware director)
  • 4 of 6 2900 series switches.

We don’t yet have any existing example to gauge overall future performance in this scenario so we may find that some servers do not get migrated to these hosts. For now, it looks like we can replace everything and still see a substantial performance gain in places.  For ease of writing I am going to ignore that possibility and talk about it more if  ‘Real-Life’ eventually encroaches upon the thread.

Background information: this is one company with a central office and, effectively, 4 satellite offices that connect to the central office using point-to-point T1s. One of the offices acts as its own company, has some resources dedicated to it on site, and is the only user of the Quickbooks server.

The SAN holds 8 production databases. The largest database is 80 GB but has only 35 users that access it, through a web-based application that runs on the IIS servers. Users are all on workstations running Windows 2000 or XP, and currently all servers run Windows 2003. The SAN is Fibre Channel and uses HSG80 controllers to present the storage to the Windows machines.

Within this site most applications, storage, server roles and network paths are redundant. Including switches and phone equipment, there are 8 full racks containing all of the servers, network hardware and phone hardware. We can potentially replace 6 of those racks with a solution that removes the complexity of MSCS and NLB, decreases latency between the application servers and the database servers, and makes more resources available to all applications.

The potential downside relates to general storage best practices for Exchange, SQL and Progress. In this scenario we are banking on the idea that the performance gain from new hardware will outweigh the hit we take by spanning the volumes that contain log files, databases and operating systems across the same 16 disks in a RAID 5 array.


Filed under KBs related to this project, Lefthand VSA on Vsphere on P4000, VMWARE