Background

I have been using PHP cloud services on Azure for a web service used by PixelPin. I like this model because it (theoretically) means I don't have to manage or worry about anything at the operating system level. I create an application and deploy it directly to the cloud service, and Azure takes care of the provisioning and scaling, including duplicating the installation across multiple machines. Updating is usually fairly painless and it allows me to concentrate on what I'm good at: writing web apps.

That's the theory. The problem is that the PHP support is very much the poor cousin of the .Net integration, and that limits various things. You cannot deploy from Visual Studio, since VS doesn't natively support PHP (yet?), which in turn means you have to deploy with PowerShell scripts. That in itself is OK, except that conflicting versions of libraries and bugs in the scripts make it a nightmare.

I thought these had settled down, but the latest version has a weird bug (known but apparently not resolved) where the XmlSerializer that writes out the configuration files (for reasons I don't understand - they are my config files to edit) writes an XML declaration specifying UTF-16 even though the files are UTF-8, and this screws up the upload. The suggested fix was to downgrade the tools to an earlier version, which I did, but that version then stopped working altogether and would no longer generate the upload package. No errors, nothing. There appeared to be Fusion log errors, but why couldn't these scripts find their assemblies, and more importantly, why weren't these errors reported from the script rather than it pretending to work?

There are other problems too. The tools don't let you deploy to Windows Server 2012, despite PHP being compatible; the cause was the scripts that run on the instance to install PHP etc. somehow not working on 2012 - something I didn't want to debug, and I definitely didn't want to start changing source code and trying to rebuild things.

The Solution

The solution I decided on was to go back to plain virtual machines (infrastructure-as-a-service rather than platform-as-a-service), something I was not very happy about but which seemed the only option.

I was not sure how this would all work, because Cloud Services hides all the details from you. For instance, I could create a correctly configured server, but then how do I make it scale? How do I duplicate the VMs? Can this be automated, or would every update have to be done manually, one machine at a time?

This was my journey (it will be long!).

Creating the VM and setting up IIS

Creating the VM was all pretty straightforward, and then I logged in with Remote Desktop to start setting things up. The VM doesn't come with any roles, so the first thing to do is add the web role using the Server Manager screen, which basically adds IIS. I didn't add any particular extensions, but that is up to you.
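If you prefer a script to clicking through Server Manager, the equivalent on Server 2012 is roughly a single PowerShell command run from an elevated prompt:

    # Add the Web Server (IIS) role plus the IIS management console.
    # Equivalent to ticking the Web Server (IIS) role in Server Manager.
    Install-WindowsFeature -Name Web-Server -IncludeManagementTools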

My simple rule with long configurations like this is to test regularly, so that you know each step is working as expected. For instance, you could go straight ahead and install PHP, but if it didn't work, it might actually be IIS that isn't working. A quick visit to localhost in IE proved that IIS was basically working - so far so good!

Install and Test PHP

The Web Platform Installer is a fairly useful way of installing MS-related software. It is not installed by default on the Azure VM, so you can get it from here: Web Platform Installer. Be warned, though: the dreaded "IE Enhanced Security Configuration" is on by default, and that just makes most navigation of web sites a real pain, so you might want to disable it for now (in Server Manager).

Once you have run the Web Platform Installer, search for PHP and there will be various versions available. I installed PHP 5.5 and also the Windows Cache Extension for PHP 5.5, which helps with content caching (I don't really know what it does!).
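If you would rather script the PHP install than click through the UI, the Web Platform Installer also ships with a command-line tool (WebpiCmd.exe). The sketch below assumes the product IDs, so list the available products first and substitute whatever names match the PHP 5.5 and WinCache entries:

    # WebpiCmd.exe lives in the Web Platform Installer's install folder.
    cd "$env:ProgramFiles\Microsoft\Web Platform Installer"

    # List what is available so you can confirm the exact product IDs.
    .\WebpiCmd.exe /List /ListOption:Available

    # Install PHP 5.5 and the Windows Cache Extension (product IDs are guesses -
    # use the names reported by the /List command above).
    .\WebpiCmd.exe /Install /Products:"PHP55,PHP55WinCache" /AcceptEula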

I then created a new site in IIS. You could reuse the existing default web site, but I like to keep the default web site as the default (or remove it altogether) so that my sites are a little harder to find for people who just hit IP addresses; instead I add a host header so the site is only reached correctly by URL. I added the host header to this new site.

For this new site, I adjusted the application pool and changed the .Net CLR Version to "No Managed Code", since it will be a PHP site. I left the pipeline as "Integrated", although I don't think that means anything without .Net code. If you leave the application pool identity as "ApplicationPoolIdentity", then you will need to set up permissions on your PHP web root to allow that user to access the folders. Instructions are here: App Pool Identity Understood
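For reference, the same site and application pool setup can be scripted with the WebAdministration module. This is a minimal sketch; the site name "MyPhpSite", host header "www.example.com" and web root C:\inetpub\phproot are all placeholders:

    Import-Module WebAdministration

    # Create a dedicated application pool with no managed runtime (PHP only).
    New-WebAppPool -Name "MyPhpSite"
    Set-ItemProperty "IIS:\AppPools\MyPhpSite" -Name managedRuntimeVersion -Value ""

    # Create the site with a host header so it only answers to the right URL.
    New-Website -Name "MyPhpSite" -Port 80 -HostHeader "www.example.com" `
        -PhysicalPath "C:\inetpub\phproot" -ApplicationPool "MyPhpSite"

    # Grant the application pool identity read access to the web root.
    icacls "C:\inetpub\phproot" /grant "IIS AppPool\MyPhpSite:(OI)(CI)RX"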

Once that was done, I went into my PHP web root (which I created alongside wwwroot in inetpub, just for consistency) and created a test PHP file that just echoed phpinfo(); I saved it as index.php. Note that phpinfo contains information useful to an attacker, so you should generally not dump that data to the web site once the public port has been opened, unless it is in an obscure filename that an attacker would not easily find. You can simply echo something like "Hello world" to prove that you are reaching the site during further testing.
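Creating the test page from PowerShell is a one-liner (C:\inetpub\phproot is my placeholder web root again):

    # A throwaway test page - remember to remove or replace it before the site goes public.
    Set-Content -Path "C:\inetpub\phproot\index.php" -Value '<?php phpinfo(); ?>'

    # For later testing, something harmless is safer:
    # Set-Content -Path "C:\inetpub\phproot\index.php" -Value '<?php echo "Hello world"; ?>'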

Since I had already set up a host header on my site (remember, that means the site must be visited via a URL, not an IP address or localhost), I had to edit the hosts file on the server to point that test URL to the localhost IP address 127.0.0.1. I then visited the site in IE using the full URL and it all worked? Nope, big crash! Opening up the Event Viewer, I found an error which wasn't formatted very clearly but which was caused by php-cgi.exe and mentioned msvcr110.dll.

It seems that php-cgi.exe has a dependency (msvcr110.dll, part of the Visual C++ runtime) which isn't installed by the Web Platform Installer, possibly because it is already present on older or more standard versions of Windows Server. Anyway, I visited and installed both the 32 and 64 bit versions of this runtime (apparently you need the 32-bit one even on Win64) from here: http://www.microsoft.com/... and then my page sprang to life!

Open the VM Endpoint

By default, the VM won't have opened up the web endpoint to the world, because when you create it, Azure doesn't know what you are going to use the server for. Fortunately, this is pretty easy to do. Note: You might have already set this up when you created the VM, in which case, just skip straight to testing that the open endpoint is working.

Open up the management portal and select the Virtual Machines icon on the left-hand side. Important: Do not click on it via the "All" button, because otherwise you do not see the Endpoints menu (or at least I didn't - not sure if it's a bug or not). After clicking on Virtual Machines, click on the VM you are working on and you should see an Endpoints menu in the portal. Click this.

By default, there is an RDP and a PowerShell endpoint (if you ticked the box when you created the VM). Obviously the RDP endpoint is needed for remoting (unless you tunnel via another machine on the virtual network). The PowerShell endpoint is for remote PowerShell operation; in other words, you can automate things by calling PowerShell as if you were on the remote machine - very useful, but not something I need right now.

Hit "Add" and in the add dialog, use "Add a stand-alone endpoint" and then you can choose the name from a dropdown list of "known" endpoint types - e.g. http (if you want https, there will be more configuration to do on the box i.e. the installation of the cert and linking it to the web site). Don't tick any boxes about load balancing just yet - we only have one server - which will be deleted in a minute.

You can test the box either by using real DNS, or, as I did temporarily, by setting my LOCAL hosts file to direct the test URL to the public virtual IP address of the VM (as seen in the dashboard for the VM).

If you visit the site from your development machine, it should load. If you used phpinfo() in your test page, I suggest removing it now, since the information it contains is quite valuable to an attacker.

Clone the Virtual Machine into an Image

In order to have a web server farm, you naturally want to clone the contents of a VM. This link shows you how to do that on the source VM and then in the Azure portal. Note: this process will delete the original VM, keeping its disk as an image that can be used to create new VMs.

This involves running sysprep.exe on the source VM, which generalizes the system and shuts the virtual machine down; capturing it in the portal then creates the image and deletes the original virtual machine. You must manually delete the cloud service that was created for this virtual machine, unless you plan to re-use it for your new load-balanced farm.
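For reference, the sysprep part boils down to one command run on the source VM (the capture itself - creating the image and deleting the VM - is then done from the portal as described in the link):

    # Generalize the installation and shut the VM down ready for capture.
    & "$env:windir\System32\Sysprep\sysprep.exe" /generalize /oobe /shutdown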

Once this is done, you can create new virtual machines based on the image you have just taken. When you choose "Add" under Virtual Machines, choose the "From Gallery" option and then click "My Images" on the left. Select the image you have just created and choose Next.

Note: In my case, the site I created under c:\inetpub\ was copied as part of the image, but I don't know exactly what is or isn't copied (I know networking settings are not) - so please test things to make sure it cloned what you think it cloned.

At this point, you can create the availability set and load-balancing endpoint, or see the next section about adding them later (you might as well add them now).

Once this has finished, you will have a replacement VM based on the image you took - which is a good test of whether it is correctly set up. If not, you can modify your new virtual machine and run sysprep again to create a new image until new VMs are created correctly. Even before you have created additional VMs, you should be able to reset your DNS to point to the new IP address of this VM and your site should still work.

Create Availability Set and LB

Creating multiple VMs for a service gives both performance and resilience benefits: performance because more than one VM can handle the incoming requests, and resilience because the VMs are spread across "racks" in the data centre, each of which has its own power supply and network switch, meaning a failure that affects multiple VMs is unlikely.

You might have already done this during VM creation; otherwise, go to the Configure tab in the portal, choose to create a new availability set and give it a name. The VM might have to restart when you save this change.

You can also modify the endpoint you created earlier for the web site and tick "Create a load-balanced set"; you can then keep the defaults. The probe settings control how often the load balancer checks each endpoint; an unresponsive endpoint is taken out of rotation and re-checked at the next probe interval (e.g. 15 seconds later). I don't imagine my services will be unresponsive any time soon.
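If you would rather script these two changes than click through the portal, the same Azure PowerShell module can do both. Another sketch with placeholder names ("web-av" for the availability set, "web-lb" for the load-balanced set, "HTTP" for the endpoint created earlier):

    # Put the VM into an availability set (this may trigger a restart).
    Get-AzureVM -ServiceName "myservice" -Name "web1" |
        Set-AzureAvailabilitySet -AvailabilitySetName "web-av" |
        Update-AzureVM

    # Replace the stand-alone HTTP endpoint with a load-balanced one,
    # probing TCP port 80 to detect unresponsive instances.
    Get-AzureVM -ServiceName "myservice" -Name "web1" |
        Remove-AzureEndpoint -Name "HTTP" |
        Add-AzureEndpoint -Name "HTTP" -Protocol "tcp" -PublicPort 80 -LocalPort 80 `
            -LBSetName "web-lb" -ProbeProtocol "tcp" -ProbePort 80 |
        Update-AzureVM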

Create Additional VMs

By now you should have a single VM running in an availability set with a Load Balanced endpoint. You now need to create any additional VMs that you want in a similar way (From Gallery) with two differences.

Firstly, do not choose "Create new cloud service" but select the name of your first VM's cloud service. Also, do not add an HTTP endpoint; we will link the new VM to the load-balanced one after creation.

Once the VM is started, click to select it and choose "Endpoints". Click "Add", but instead of choosing "Add a stand-alone endpoint", choose the option "Add an endpoint to an existing load-balanced set" and select the load-balancing endpoint you set up for the first new VM.
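Again, if you prefer scripting, joining the existing load-balanced set from the Azure PowerShell module looks something like this ("myservice", "web2" and "web-lb" are the same placeholder names as before):

    # Add the second VM's HTTP endpoint straight into the existing load-balanced set.
    Get-AzureVM -ServiceName "myservice" -Name "web2" |
        Add-AzureEndpoint -Name "HTTP" -Protocol "tcp" -PublicPort 80 -LocalPort 80 `
            -LBSetName "web-lb" -ProbeProtocol "tcp" -ProbePort 80 |
        Update-AzureVM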

Once this is done, the easiest way to test it is to alter the default page on each server in some way that lets you tell which server is supplying the page. Don't expose anything useful about the server; for instance, you probably should avoid returning the Azure cloud service name (which is not exactly private, but is not necessarily public either) and instead return something innocuous like "Server 1" and "Server 2".

Now visit the site and keep hitting ctrl-F5 to force refresh and make sure that you get pages served from both servers. They won't necessarily alternate exactly in turn but as long as you see both, it is working OK.
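A quick way to do the refreshing from your development machine is a small loop (the URL is a placeholder; disabling keep-alive forces a new connection each time so the load balancer gets a chance to pick a different server):

    # Hit the site repeatedly and print which server answered each request.
    1..20 | ForEach-Object {
        (Invoke-WebRequest -Uri "http://www.example.com/" -UseBasicParsing -DisableKeepAlive).Content
    }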

Deploying your Project

Deployment and change control is obviously a big subject and not something you should just jump into. My experience is that you want quick, easy deployments (especially when someone is waiting for an urgent fix!) but it should also be easy to rollback if you deploy something that doesn't work properly. In addition, you should have good visibility of changes, because trying to track down a bug if you are not properly tagging code changes can be really difficult. In a recent bug I had, because I could diff the code changes, I knew that the problem was a broken deployment and not a code error.

My code lives in Subversion and I tag releases before they go live. What I plan to do is write a PowerShell script that uses the remote PowerShell functionality to 1) copy the current live site to a backup directory, 2) pull a labelled tag from Subversion into the web root, and 3) provide the option to roll back the deployment if it all goes wrong. I haven't written this yet, but it should be fairly easy since it will mostly use svn command-line calls and a few directory copies. It might have to use some IIS functions to point the site to different directories, which might be easier than copying files over the top of other files (which always goes wrong when a process has them open/in-use).
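A rough sketch of the shape I have in mind is below, taking the "repoint the site at a new directory" approach rather than copying files over the top. The site name, Subversion URL and directory names are all placeholders, it assumes the svn command-line client is on the server, and in practice the whole thing would be wrapped in Invoke-Command against the remote PowerShell endpoint:

    # deploy.ps1 - rough sketch only. Each release is exported to its own directory;
    # the previous release directory is left in place, which doubles as the backup.
    param(
        [Parameter(Mandatory = $true)]
        [string]$Tag,                                             # e.g. "release-1.2.3"
        [string]$SiteName = "MyPhpSite",                          # placeholder site name
        [string]$SvnBase  = "https://svn.example.com/repo/tags",  # placeholder repository
        [string]$Releases = "C:\inetpub\phproot-releases"         # placeholder release root
    )

    Import-Module WebAdministration

    # Export the tagged code into its own directory (no .svn metadata).
    $target = Join-Path $Releases $Tag
    svn export "$SvnBase/$Tag" "$target"

    # Remember where the site currently points, then switch it to the new release.
    $previous = (Get-Website -Name $SiteName).physicalPath
    Set-ItemProperty "IIS:\Sites\$SiteName" -Name physicalPath -Value $target

    Write-Host "Deployed $Tag (previously $previous)."
    Write-Host "To roll back: Set-ItemProperty 'IIS:\Sites\$SiteName' -Name physicalPath -Value '$previous'"

Repointing the physical path means rollback is a one-line change, and nothing is ever copied over files that IIS might have open.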