Windows Server 2012 Evaluation – convert & activate to fully licensed

Update: This post was made in 2013 (5 years ago at the time of writing). It’s clear from the many comments that it sometimes works and sometimes doesn’t, and I suspect Microsoft may have broken it with various install builds, or it just didn’t work with certain releases and configurations. Most recently, I installed a full copy of Win 2012 on a new server (using official Open License ISO media); it didn’t ask me for a license code on install, and when I tried to use the method below post-install it wouldn’t work – thanks MS!

I believe I found that running the following command from the cmd prompt did the trick:

slmgr -ipk

It allowed me to replace the 25-character generic / trial key MS had installed the OS with, substituting my proper license key. If you type in the above and hit enter, it will list all the available parameters, so you can work out how to achieve what you want. No idea if it will let you apply an OEM or retail key to an Open License install, etc.! Good luck :-)
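For illustration, the full form looks something like this – the key below is a placeholder, so substitute your own 25-character product key:

```shell
:: Install your own product key (placeholder shown):
slmgr -ipk XXXXX-XXXXX-XXXXX-XXXXX-XXXXX

:: Then kick off online activation:
slmgr -ato
```

The -ato switch is the standard online-activation trigger; running slmgr with no valid parameters will list the rest.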


So I’m at the end of the trial period for Windows Server 2012, and having bought a volume license for the Datacenter edition, I need to activate it. Microsoft have taken away the ability to change product keys through Control Panel -> System, so we have to use the command line.

I’ve read a lot of articles out there on this, which generally don’t work, presenting an error when you try to process your new key using the slmgr command-line tool.

First of all, you need to establish your exact currently installed edition. From an elevated command prompt, run the following command:

DISM /online /Get-CurrentEdition

In amongst the blurb that appears on screen, it will tell you your current edition (in my case ServerDatacenterEval). Make a note of this – you will use it in the next command with the trailing ‘Eval’ bit omitted.

With your license key to hand, now run the following command:

DISM /online /Set-Edition:ServerDatacenter /ProductKey:XXXXX-XXXXX-XXXXX-XXXXX-XXXXX /AcceptEula

The edition name and product key in the command above will need to be your own specific values (remember to drop the ‘Eval’ bit for the Set-Edition). I believe you can also use this as an opportunity to upgrade to a higher edition, for example using the /Set-Edition switch to go from Standard up to Datacenter. The /AcceptEula switch allows the system to silently accept the Microsoft license agreement.

When you run this command, your system will need to restart once or twice. Thereafter (if it doesn’t happen automatically) you will be able to activate with your newly provided key from Control Panel -> System or using the slmgr tool, and you will now be running a licensed copy :)

Use OpenSSL & Windows to Convert UCC / SAN certificate from .crt / .key format to a .pfx for Exchange 2010

This post assumes you have already completed the process of getting a signed certificate issued and installed on a Linux / Apache server, and that you would like to convert that certificate to install and use on an additional Exchange 2010 server. The cheapest UCC (multiple FQDNs on the same certificate) I’ve found is with GoDaddy.

Download a Windows implementation of OpenSSL from here. I recommend getting the full 32-bit version, and you will most likely need the ‘Visual C++ 2008 Redistributables’ as well. Install OpenSSL to C:\OpenSSL or another location that is convenient for you. Make sure you run Windows Update post-install, to check for any security patches for the C++ 2008 redistributable you’ve installed.

With OpenSSL installed, you will need to download copies of the public and private keys that make up your certificate configuration. These will be stored somewhere on your existing web server. I have a VPS with Host Gator, which uses a pretty typical Linux distribution called CentOS, and which can be accessed via SSH / secure FTP. I personally connect up using a secure channel through FireFTP, a free add-on for Firefox. It is essential you use secure means for all file downloads, as interception would completely compromise your certificate’s security. And don’t do this on a public / shared PC!

Having connected to your web server, browse to /etc/ssl from the folder root. Here should be a folder called certs, from which you need to download one or two files. Find and download your certificate file (typically named after your domain, with a .crt extension) and, if applicable, your Certificate Authority’s bundle file. Download these files to C:\OpenSSL\bin.

Additionally, within /etc/ssl you will also find a folder called private, and within here you need to locate your private key file (typically with a .key extension). Also download this to C:\OpenSSL\bin.

Fire up a Windows Command Line (cmd) and type cd C:\OpenSSL\bin

Then type openssl to fire up the OpenSSL command line. At this command line, enter the following command, replacing the placeholder file entries with your own appropriate ones:

pkcs12 -export -out <output.pfx> -inkey <private.key> -in <certificate.crt> -certfile <ca-bundle.crt> -name "Friendly Name"

This will take your existing certificate, private key and Cert Authority bundle, and generate a .pfx file compatible with Exchange 2010 / Windows (the -name switch defines the friendly name for the certificate, as it will appear in your Exchange Management Console later). You will be prompted to enter a password twice – keep a note of this, as you’ll need it later.
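If you’d rather see the whole round trip non-interactively from a normal command prompt (rather than inside the openssl shell), here is a self-contained sketch – every filename and the password are made up for the demonstration, with a throwaway self-signed certificate standing in for your real .crt / .key pair:

```shell
# Generate a throwaway self-signed cert + key (stand-ins for your real files).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=mail.example.com" -keyout demo.key -out demo.crt

# Convert to a .pfx; -passout supplies the export password non-interactively.
openssl pkcs12 -export -inkey demo.key -in demo.crt \
  -name "Demo UCC Cert" -passout pass:ChangeMe123 -out demo.pfx

# Verify the bundle can be read back with the same password.
openssl pkcs12 -in demo.pfx -passin pass:ChangeMe123 -info -noout
```

In your real conversion, add -certfile with your CA bundle file so the intermediate chain is included in the .pfx.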

The generated .pfx file will now be residing in C:\OpenSSL\bin. Move it to a location where your Exchange server will be able to access it (such as a secured network share).

From your Exchange server, load up the Exchange Management Console, and navigate to the root of Server Configuration. Look to the right-hand pane for the Import Exchange Certificate… link. Click this link, locate your .pfx file, enter the password you set, then complete the wizard to import the certificate. You now just need to bind the Exchange services (such as SMTP) you would like associated with this certificate.

I STRONGLY recommend you now delete all of the sensitive files you downloaded from your web server, or generated with OpenSSL, to complete this task – and the job is done! :)

Can’t remove System Center 2012 Endpoint Protection client (it just keeps reinstalling)

Even after removing all traces of System Center 2012 from our AD network, when I uninstalled the client software from each user system, within a few hours (or after a reboot) the software would mysteriously regenerate and come back. Simply uninstalling the client is not enough. Here’s what you need to do.

Working on the assumption you are using Windows 7, go into Control Panel and select Programs and Features. Uninstall:

System Center 2012 Endpoint Protection Client
Windows Firewall Configuration Provider

Having done this, load up the command line (cmd) with administrative privileges (search for cmd and right-click to get this option). Then type in the following commands at the prompt:

cd c:\windows\ccmsetup
ccmsetup /uninstall

The cursor will just return without any confirmation of completion. I found it took up to 10 minutes for the uninstall to release a lock on the setup files (and presumably this long to actually carry out the uninstall). So after 10 minutes, close the command-line window, and do a search for %windir%. In that folder, you should find two folders, CCM and ccmsetup. Highlight and delete them. You should be prompted for admin authorisation, and then (if the uninstall is truly complete) you should see them delete. Otherwise, try to repeat the deletion after a few more minutes. If you see a folder called ccmcache, the uninstallation is definitely still running.
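The command-line part of the steps above can be sketched as a short batch sequence – paths assume the default %windir% locations, and you should still leave a gap for the uninstall to finish before the deletions:

```shell
:: Run from an elevated command prompt.
cd /d %windir%\ccmsetup
ccmsetup /uninstall

:: ...wait ~10 minutes for the uninstall to complete, then remove leftovers:
rd /s /q %windir%\CCM
rd /s /q %windir%\ccmsetup
```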

After this, reboot your PC, and you should finally be free of the System Center 2012 Endpoint Protection client (assuming you haven’t set it up to reinstall via Group Policy or an active System Center server).

Updating ESXi / vSphere 5 using the CLI (Command Line Interface)

In a previous article, I blogged on how to update ESXi 4.0 to 4.1. The principles I described could be applied to all the version 4 updates of ESXi / vSphere that followed thereafter.

Unfortunately, with the release of version 5 of ESXi, the method I described previously no longer works when patching the system. If you attempt it, you get the following error message:

“This operation is NOT supported on 5.0.0 platform.”

Instead, you now need to use a different command line tool, which I shall now describe.

First off, you need the newest release of the CLI install package from VMware, which can be freely downloaded here.

Having installed it, I highly recommend restarting the workstation you are using the CLI on. On my system, some of the Perl-related libraries the CLI depends on didn’t seem to work till I did a restart.

Having restarted, bring up a Windows command line prompt, ideally in elevated admin mode (to make sure you have unrestricted access to your own system). Normal mode should be ok, as long as your update files have been downloaded to a local file system that can be accessed by your user account.

You also need to have the VMware vSphere Client installed, which you should be able to get (if you don’t have it already) just by entering the IP address of your ESXi host in a web browser window. The web server running on there should give you a download link.

Finally, go to the VMware website, and download the latest patches you want to apply to your ESXi 5 setup. At the time of writing, I was applying the major Nov 2011 update, which brought me up to ESXi 5.0.0 build 515841.

For simplicity, having downloaded the latest ‘vib’ update (VMware’s terminology for an update archive, normally in the form of a ZIP file), I renamed it to a shorter filename.

Having done this, you will need to upload the file into one of your storage volumes via the vSphere Client. Note the path location to where you put the file. Furthermore, make a note of the full storage mount location of the actual datastore, which can be found by selecting the datastore under Configuration -> Storage in the vSphere Client, and looking in the bottom panel labelled Datastore Details. As much of a pain as it might be, my finding was that it was easiest to take the whole entry next to ‘Location’ and not substitute the common name of the datastore (so the entry you need will have a long GUID-type path entry).

Go back to the Windows command line and type one of the following from the command prompt in order to get in to the CLI script folder:

For 32-bit OS:
cd C:\Program Files\VMware\VMware vSphere CLI\bin

For 64-bit OS:
cd C:\Program Files (x86)\VMware\VMware vSphere CLI\bin

You may also wish to enter your host in to maintenance mode before doing the update, which can be easily done from the vSphere Client.

Finally type the following command:

esxcli -s <server> -u root -p <password> software vib install -d <path-to-update.zip>

In the above, replace the placeholder entries with your own values as follows:

-s    Your Server IP or Hostname
-u    Your ESXi host admin user name, normally ‘root’
-p    Your ESXi host password
-d    The location of your file, using the location of the datastore, and the further folder location you may have created to put the update in (the ‘d’ stands for ‘depot’, by the way).
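Putting it all together, the finished command looks something like this – the IP address, password and datastore GUID path below are examples only:

```shell
:: Example values only – substitute your own host, credentials and path.
esxcli -s 192.168.0.10 -u root -p MyPassword software vib install ^
  -d /vmfs/volumes/4e8a1c2d-12345678-abcd-0123456789ab/updates/update.zip
```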

When you hit enter, the cursor will drop for a time, and nothing may appear to happen for several minutes. If the update is successful, you will eventually get an installation result message confirming all is ok, with a list of all updates applied. If it fails, I assume it will tell you, but it has yet to fail for me :)

You will then need to go back in to the vSphere client, restart the ESXi server, and then take it out of maintenance mode. Then manually fire up any virtual machines you have. You should then be done.

There may be a way of applying multiple updates simultaneously, but I’ve yet to need to do that. Doing one file at a time should work OK though (I suggest oldest updates first).

Microsoft Forefront TMG 2010 won’t upgrade to Service Pack 2

On trying to upgrade to Service Pack 2 for MS Forefront TMG (Threat Management Gateway), I repeatedly got the below error:

“The upgrade patch cannot be installed by the Windows Installer service because the program to be upgraded may be missing, or the upgrade patch may update a different version of the program. Verify that the program to be upgraded exists on your computer and that you have the correct upgrade patch.”

This one had me beaten for a while. I had already upgraded to SP1 for TMG, and I couldn’t see why upgrading to SP2 wouldn’t work. Furthermore, in recent years Microsoft have generally allowed you to jump service packs anyway (such as going to a Service Pack 2 whilst still having the original RTM of a given product).

I dug around a bit, and found there is an interim update for TMG, post SP1, that must be installed before you can install SP2. This update (unsurprisingly named “Software Update 1 for Microsoft Forefront Threat Management Gateway (TMG) 2010 Service Pack 1”) can be found here.

Install this, then try running the SP2 for TMG update again. You should find it all goes well.

Avast! Business Protection [Plus] – exclamation mark on client Avast and console, when shields are disabled

Avast Business Protection [Plus] has recently been released – a way overdue update to Avast’s business-targeted line, which had previously been left on the 4.8 code base since the beginning of 2010 (when Avast 5.0 came out for home users). I’ve had various problems with this update, mostly to do with licensing, but I’ll save that for another blog….

In the new admin console I have created different groups for different servers / workstations, dependent on their shield (module) needs, and dropped the PCs discovered on the network into the correct groups. For example, our general file servers do not need the SharePoint or the Exchange shield scanning plug-in to be enabled; it’s an unnecessary overhead, and at best just pointless to have on.

Disabling the unneeded shields was nice and easy – can all be done under the group settings for a collection of computers (under the sensibly named ‘Shields‘). But whilst this worked, the end result was not good; on the client side (the server or workstation running the pushed out copy of Avast Business Protection client), I got an exclamation mark on the taskbar like this:

And in the Avast! Administration Console, for the given computers:

It would seem there are no easy tick boxes to stop this problem; you have to find the solution by clicking a big scary button…

To start, go to edit the settings for the group whose shield monitoring you want to modify. In the window that appears, click on the bottom left-hand option ‘Expert settings’.

Next, click the big scary button that reads “I’ll take the risk, show the expert settings”

No doubt you could seriously mess up some installs if you alter certain settings – I don’t know what most of the functions do. I played with a test setup first, and was delighted to find what I needed to correct these problems. Let’s assume we just want to disable monitoring of the Exchange module, on systems that don’t have Exchange. Scroll down in the list, and find:


You’ll notice it has a value of ‘1‘ set to the right of it. Double click on this value, and change it to ‘0‘.

Left-click Save at the bottom of the window, and within a minute your client system’s exclamation mark should have gone, and your admin console for the system should look more like this:

You’ll notice there are lots of other ‘PropertyPowerbar‘ options in the same area of the expert settings. Zeroing out any of these will stop the Avast client monitoring that shield, and stop it bringing up the exclamation mark. Be careful – you don’t want to zero out a shield you are actually running, as if that shield is at any time disabled by some rogue virus or the like, it won’t show up on the console.

This seems a silly error that Avast will no doubt fix at some point in the future – if nothing else, disabling a given shield should zero the value for you. As of the start of August 2011, this is yet to be seen – early days! :)

Adding .admx GPO templates for Win 2008 Group Policy and beyond…

For a long time I have tried to add .admx files to individual group policies using the management editor, as I do with older .adm template files. However, whilst the conventional Add/Remove Templates method works for the old-school .adms, it gives the following error message if you try to add an .admx:

“file.admx is not a valid template file. Only files that end with the .adm file extension can be added to this Group Policy Object.”

Why does this happen? Because Microsoft revised their policy on where the templates are stored and implemented from. Now, you just need to make sure that your required .admx files are placed in the %systemroot%\PolicyDefinitions folder. Also, the .adml files (which should be provided) need to be placed in the appropriate language subdirectory (such as en-us) of this folder, for the policies to list and work correctly. This only needs to be done on the system you manage group policy from, not every DC in your Active Directory network.
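As a quick sketch, deploying a template then boils down to a couple of copies – the template filenames here are hypothetical:

```shell
:: Run elevated; MyTemplate.admx / .adml are example names.
copy MyTemplate.admx %systemroot%\PolicyDefinitions\
copy MyTemplate.adml %systemroot%\PolicyDefinitions\en-us\
```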

The upshot of this new method is that every .admx / .adml file you add to this folder tree is automatically available to all GPOs managed from that system. Conversely, I believe that under the old system you had to add the .adm file to every individual policy you wanted to use the template in.

SharePoint 2010 – Search and other Web Apps don’t work out of the box

Every time I tried to do a search on my SharePoint server, it came back with an error as follows:

“The Web application at http://sharepoint.domain.local/ could not be found. Verify that you have typed the URL correctly. If the URL should be serving existing content, the system administrator may need to add a new request URL mapping to the intended application.”

For a while it had me beaten. Then I realised the problem – elements of SharePoint will not work properly if the internal FQDN (Fully Qualified Domain Name) is used from a browser instead of the host name, because SharePoint only responds correctly to the URLs it has been pre-configured to know about. To illustrate the point, by connecting to the server via http://sharepoint/, the search process worked perfectly.

So how to fix this? In an ideal world you want SharePoint working fine on the host name, the internal FQDN and [depending on your setup] the external FQDN. And here is how you do it…

1) Fire up the SharePoint Central Administration.

2) On the opening page, look under the System Settings heading for Configure alternate access mappings.

3) From here you can edit, add or ‘map to external resource’ a URL. In this case, I am going to add a URL for my internal FQDN. So I click Add Internal URLs.

4) Next, you need to select the entry for Alternate Access Mapping Collection. I click the drop-down link to do this and use the Change option, and in the window that follows select my main SharePoint site. This then takes me back to the previous window with this option selected.

5) In the field for Add Internal URL, I need to add my FQDN, protocol and port number. As I am operating on port 80 / http, this is set as: http://sharepoint.domain.local:80. My required domain is internal, so from the zone list I select Intranet then click on Save.

6) You’re done! Fire up a browser using the newly added domain name, and you will find all should now work. Note that if you are applying an internal domain name that is completely different from the SharePoint host name, you will need to make changes to your DNS server’s records to reflect this, or else the name won’t resolve.
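If you prefer the command line, the same mapping can (I believe) be added from the SharePoint 2010 Management Shell – the URLs below are the ones from this example, so substitute your own:

```shell
# New-SPAlternateURL adds an internal URL to the chosen web application's zone.
New-SPAlternateURL "http://sharepoint.domain.local" `
    -WebApplication "http://sharepoint" -Zone "Intranet"
```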

Google Earth – change location of ‘My Places’ / .kml files

Update: this post was made in 2011 (at time of writing it is 2018) and in that time Google Earth has no doubt changed considerably. I haven’t even used it in about 5 years, and I would imagine there is an easier way of doing things now – but perhaps not! Either way, best of luck to you if you are still looking for a current solution, and I’m closing comments on this post now. Cheers!

When you create drop pins on Google Earth, and add them to ‘My Places’, the underlying information is stored in several .kml files. By default, under Windows 7 at least, the location of these files is in:


In my view, this is a bad place for a number of reasons. The main one for me is that I use redirected folders to keep my app data on a network server, and this server is backed up nightly. With the default Google Earth config, the kml files just sit on the local PC, don’t get backed up, and would be lost if the hard drive ever went down. Also, I like to hot desk between computers, and with the default config my .kml files aren’t going to be following me.

As far as I can see, the solution is simple: open up regedit without elevated permissions, and drill through until you find the entry:

HKEY_CURRENT_USER\Software\Google\Google Earth Plus\KMLPath

If you bring up the data entered for KMLPath you will see the aforementioned path location in there. Completely remove this. You can now replace it with another local location, or a network location. The location must be a complete path; I found variables such as %username% do not work. So, for example, I changed mine to:

\\Server1\Redirected Folders\bobby.c\Application Data\Google\Google Earth

You must make sure that Google Earth has fully exited, and that the folder you map to already exists on the server (just create it using Windows Explorer). You should be able to copy existing .kml files across from the old to the new location, and Google Earth should roll with them. I would copy the .kml files only (leave the cache et al. where it is), and as ever, make sure you keep a backup before doing this….
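For reference, the same registry change can be scripted – the UNC path below is just my example from above, and the key will only exist once Google Earth has been run at least once:

```shell
:: Run as the user in question (the key lives under HKCU), not elevated.
reg add "HKCU\Software\Google\Google Earth Plus" /v KMLPath /t REG_SZ ^
  /d "\\Server1\Redirected Folders\bobby.c\Application Data\Google\Google Earth" /f
```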

‘Verify that the Activity Feed Timer Job is enabled’ error in SharePoint 2010

I’m only just breaking open the box on properly using SharePoint. On every test install I have done, I have been hampered by this same error. The solution is simple.

First off, completely ignore the link that Microsoft gives you for ‘help’ – it resolved nothing. Instead, from within the Central Administration home page, do the following:

Click the Monitoring title.
Under the Timer Jobs heading, click on Review job definitions.
Scroll down the list and look for User Profile Service Application – Activity Feed Job (it might be worded differently pre-SharePoint 2010 SP1). You’ll note this is ‘disabled’. Click on the title link, and then the Enable button on the page that follows. This will set the job to run hourly by default, and in due course your problem should disappear from the problems list in Central Administration.
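The same fix can (I believe) be applied from the SharePoint 2010 Management Shell – the display-name match below is an assumption, so adjust the wildcard if your build words the job differently:

```shell
# Find the Activity Feed timer job by display name and enable it.
$job = Get-SPTimerJob | Where-Object { $_.DisplayName -like "*Activity Feed*" }
Enable-SPTimerJob -Identity $job
```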