Docker provides many advantages over non-Docker deployments, even for old netfx apps: a reproducible build environment, the use of Compose and Swarm for testing and production, and much less mess on the hard disk, since most of what the app needs to run is already in the container.
However, understanding Microsoft's set of Docker images has become as unfathomable as their plethora of .Net versions. In the name of agility, they seem to change a lot of things significantly in many places, which makes lots of documentation instantly out-of-date and leads to them dropping almost all support for existing frameworks because "just use .Net 5.0".
Anyway, I thought it would be easy to dockerise a netfx application, since I have already dockerised dotnet core on Linux images, so I didn't expect it to be much harder.
What you might not know is that because .net framework web apps are built on System.Web, which relies on IIS, you need quite a full-blown version of Windows to run them. You could possibly do this on the newer Server Core (nothing to do with dotnet core!) and Nano containers, but then you would have to add a tonne of stuff yourself.
Instead, you need to use the aspnet Docker images, based on Windows Server 2016 or 2019 depending on your target version.
Do NOT attempt to use Docker Hub to work out what you need. The pages are a mess, they are inconsistent between different repositories, the tags don't seem to make much sense, some have been renamed and some are not listed at all. Add in the confusion of words like Nano and Core (not to be confused with dotnet core) that you have probably not heard of before, as well as the Semi-Annual Channel etc. etc., and you will quickly go grey-haired.
Instead, assuming you understand the basics of Docker, you will need to start with a base SDK image that will allow you to build your application:
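Something like this, with the stage name and tag matching what I describe below (adjust the tag to your own target framework version):

```dockerfile
# .NET Framework SDK image from Microsoft Container Registry,
# tagged with the target framework version (4.7.2 in my case)
FROM mcr.microsoft.com/dotnet/framework/sdk:4.7.2 AS prepare
```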
I called this stage prepare but you can call it what you want. Note the 4.7.2 tag, which matches my app's framework version. On most of these images you can always install stuff yourself, but then you need stable download locations and have to take the extra time for those parts to download/install/build, so getting a maintained image with the latest Windows updates etc. is always nice.
One of the issues you are likely to come across with Windows containers is the fact that backslash is Docker's default escape character, and it doubles as the line-continuation character. Mixing this with Windows paths can be confusing, and it clashes with the backtick used to continue Powershell commands. If you do not set the escape character to backtick at the start of the file, you will get muchos confused, so do that!
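The escape directive has to be the very first line of the Dockerfile:

```dockerfile
# escape=`
```

With backtick as the escape character, Windows paths keep their backslashes and long RUN commands can be continued with a trailing backtick, just like in Powershell.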
Another thing you will need to get good at with Windows containers is using layered builds and Docker's caching mechanism to massively reduce build times. Basically, move the least frequently changing steps to the earliest part of the build you can. Imagine you run npm install after copying your source code: any time you change a file, that layer and everything after it is rebuilt, which means npm install runs again too.
It is convention to use the /app directory for all the building work so we set the working dir and then copy over any files we need for nuget restore. This might be a single project or an entire solution. Don't forget to include any packages.config and nuget.config files that are needed:
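For example (the solution, project and folder names here are placeholders for whatever your app uses):

```dockerfile
WORKDIR /app

# Copy only the files that nuget restore needs, so this layer is
# only invalidated when package references actually change
COPY MySolution.sln .
COPY nuget.config .
COPY MyWebApp/MyWebApp.csproj ./MyWebApp/
COPY MyWebApp/packages.config ./MyWebApp/
```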
And then you will need to run nuget restore in the relevant place. In my case, there is a nuget.config in the solution folder, and this is where the command runs on our current TeamCity build, so it is easy to set up the same way here:
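A sketch, run from the folder containing the solution and nuget.config (the solution name is a placeholder):

```dockerfile
# nuget.exe is already on the path in the SDK image
RUN nuget restore MySolution.sln
```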
This layer is slow and rarely changes, so it comes really early on. Next: since I need to use NPM, and that is also a rarely changing part, I am going to set it up now before building the project. I could have done it before the copying part, but whatever!
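A sketch of the Node setup; the version number, download approach and registry URL here are assumptions, so adjust them to whatever your build actually needs (and note this assumes escape=` at the top of the file):

```dockerfile
# Switch RUN to Powershell for the download/extract commands
SHELL ["powershell", "-Command"]

# Pin a known version of Node rather than taking whatever is latest
RUN Invoke-WebRequest "https://nodejs.org/dist/v14.17.0/node-v14.17.0-win-x64.zip" -OutFile node.zip ; `
    Expand-Archive node.zip -DestinationPath C:\ ; `
    Rename-Item C:\node-v14.17.0-win-x64 node ; `
    Remove-Item node.zip
RUN setx /M PATH ($env:PATH + ';C:\node')

# Related commands combined into one RUN, so one filesystem layer
RUN npm config set registry https://registry.npmjs.org/ ; `
    npm install -g gulp-cli
```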
Two things are important here: 1) I am using a known version of Node; 2) be very careful splitting RUN commands over multiple lines. If the commands are all related, it is faster to combine them, because they then produce a single filesystem layer; but if you get them wrong, for example a semi-colon confuses the command interpreter, you might end up setting your registry incorrectly! Since Docker is so good at caching, it 100% makes sense to build these up line by line, testing each one by running the container and using the CLI. For example, my installation of gulp-cli was failing, but you couldn't see that in the docker build console; it only failed later when running gulp build. Using the CLI, I could have checked that it was actually installed.
Now that this is done, I can start looking at my project and copying everything else.
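In the simplest form:

```dockerfile
# Everything else: source files, content, build scripts
COPY . .
```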
Why not copy everything earlier on, when we copied the project files? Because then any change to any file in those directories would cause the build to restart from that point. This way, only changes to the project files trigger the earlier part of the build; otherwise we can quickly get to here.
Then we move into the relevant app folder, run npm install against it, and run a gulp build. Note I am using an .npmrc for Azure DevOps and deleting it afterwards so it doesn't end up exposed in the output container. I have added it to the local project just so the build works, but I need to find a better way of passing the secret token into the docker build.
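A sketch, with the folder name as a placeholder (and assuming Powershell is the shell and escape=` is set):

```dockerfile
WORKDIR /app/MyWebApp

# .npmrc holds the Azure DevOps feed token; remove it afterwards so
# it does not end up in the output that gets copied to the runtime image
RUN npm install ; `
    gulp build ; `
    Remove-Item .npmrc
```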
This step is also slow, and I should probably have copied only the relevant source files beforehand so that it benefits from caching too, but otherwise we just need to build.
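The debug build (solution name is a placeholder):

```dockerfile
# Debug build for the (not yet implemented) unit test stage;
# WarningLevel=0 silences the gazillions of project warnings
RUN msbuild MySolution.sln /p:Configuration=Debug /p:WarningLevel=0
```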
This builds for unit tests, so I am using debug. I have also set the warning level to 0 to avoid the gazillions of project warnings. I have not yet implemented the unit tests, so that part is missing.
Then we build for release
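Same again in release configuration (solution name is a placeholder):

```dockerfile
RUN msbuild MySolution.sln /p:Configuration=Release /p:WarningLevel=0
```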
We then take the chance to create a runtime image. We don't need the SDK any more; on Linux that saves a large percentage of the image size, but on Windows it doesn't save much of the roughly 15GB container! We use the following image and have to download and install the URL Rewrite module for IIS.
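Something like this; the image tag should match your framework and Windows Server version, and you should verify the current URL Rewrite download link on Microsoft's site rather than trusting the one shown here:

```dockerfile
# ASP.NET runtime image matching the framework version and the
# Windows Server release we are targeting
FROM mcr.microsoft.com/dotnet/framework/aspnet:4.7.2-windowsservercore-ltsc2019 AS runtime

SHELL ["powershell", "-Command"]

# Download and silently install the IIS URL Rewrite module
RUN Invoke-WebRequest "https://download.microsoft.com/download/1/2/8/128E2E22-C1B9-44A4-BE2A-5859ED1D4592/rewrite_amd64_en-US.msi" -OutFile rewrite.msi ; `
    Start-Process msiexec.exe -ArgumentList '/i','rewrite.msi','/qn' -Wait ; `
    Remove-Item rewrite.msi
```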
We copy a static file, which will eventually become a secret, and copy the output of the build from our build stage. I am not sure whether I should be precompiling and publishing instead, but I just wanted to get the container up and running to prove it works.
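A sketch; the file and folder names are placeholders:

```dockerfile
WORKDIR /inetpub/wwwroot

# Static settings file - this will eventually become a proper secret
COPY MyWebApp/settings.json .

# Pull the built site out of the build stage
COPY --from=prepare /app/MyWebApp/ .
```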
It did work (after about the 30th attempt), although it did take a long time to start up. Also note that on Windows you cannot access the container via localhost or 127.0.0.1; you need to use docker inspect to find the container's IP and access it using that IP and the container port (80 in our case).
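For example (the container name is a placeholder; this assumes the default Windows NAT network):

```shell
docker inspect -f "{{ .NetworkSettings.Networks.nat.IPAddress }}" mycontainer
```

Then browse to that IP on port 80.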