In many cases, Docker provides real advantages over non-Docker deployments, even for old netfx apps. You get a reproducible build environment, the use of Compose and Swarm for testing and production, and much less mess on the hard disk, since most of what you need to run the app is already in the container.

However, understanding Microsoft's set of Docker images has become as unfathomable as their plethora of .Net versions. In the name of agility, they seem to change significant things in many places at once, which makes lots of documentation instantly out-of-date and leads to them dropping almost all support for existing frameworks because "just use .Net 5.0".

Anyway, I have already dockerised dotnet core apps on Linux images, so I thought dockerising a netfx application wouldn't be much harder.

What you might not know is that because a classic .net framework web app is built on System.Web, which relies on IIS, you need quite a full-blown version of Windows to run it. You could possibly do this on the newer Server Core (nothing to do with dotnet core!) and Nano Server containers, but then you would have to add a tonne of stuff yourself.

Instead, you need to use the aspnet docker images, based on Windows server 2016 or 2019 depending on your target version.

Do NOT attempt to use Docker Hub to work out what you need. The pages are a mess, they are inconsistent between repositories, the tags don't seem to make much sense, some images have been renamed and some of the tags are not even listed. Add in the confusion of words like Nano and Core (not to be confused with dotnet core) that you have probably not heard of before, as well as the Semi-Annual Channel etc. etc., and you will quickly go grey-haired.

Instead, assuming you understand the basics of Docker, you will need to start with a base SDK image that will allow you to build your application:

FROM mcr.microsoft.com/dotnet/framework/sdk:4.7.2 AS prepare

I called this stage prepare but you can call it what you want. Note the 4.7.2 tag, which matches my app's target framework version. On most of these images you can always install extra bits yourself, but then you need stable download locations and have to spend extra time downloading/installing/building those parts, so getting a maintained image with the latest Windows updates etc. is always nice.
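If you want to pin the Windows base as well, the tags also come in longer forms that include the Windows release; check the repository's tag listing on MCR for exactly what is published, but it looks something like this:

FROM mcr.microsoft.com/dotnet/framework/sdk:4.7.2-windowsservercore-ltsc2019 AS prepare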

One of the issues you are likely to come across with Windows containers is the fact that backslash is both Docker's default escape character and its line continuation character. Mixing this with Windows paths can be confusing, and it clashes with the backtick that Powershell uses for escaping and line continuation. If you do not set the escape character to backtick at the start of the file, you will get muchos confused, so do that!

# escape=`
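As a quick illustration (the paths are made up), with that directive at the top of the Dockerfile you can continue a RUN command over several lines with a backtick and still write Windows paths with ordinary backslashes:

# escape=`
FROM mcr.microsoft.com/dotnet/framework/sdk:4.7.2
# the backtick continues the line; backslashes in paths are left alone
RUN mkdir C:\build\temp && `
    copy NUL C:\build\temp\marker.txt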

Another thing you will need to get good at with Windows containers is how to use layered builds and Docker's caching mechanism to massively reduce build times. Basically, move the least frequently changing steps to the earliest part of the build you can. Imagine you put npm install after copying and building your code: any time you change a source file, the cache is invalidated at that point, which means npm install runs again too.
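As a minimal sketch of the idea (the file names are hypothetical), copy only what npm install depends on, run it, and copy the rest of the source afterwards, so that source edits don't invalidate the npm install layer:

WORKDIR /app/Presentation/Acme.App
# copy only the manifest first so this layer stays cached until it changes
COPY Presentation/Acme.App/package.json ./
RUN npm install
# now copy the rest of the source; editing a source file no longer re-runs npm install
COPY Presentation/Acme.App/ ./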

It is conventional to use the /app directory for all of the build work, so we set the working dir and then copy over any files we need for the nuget restore. This might be a single project or an entire solution. Don't forget to include any packages.config and nuget.config files that are needed:

WORKDIR /app

# copy project files and restore as distinct layers
COPY Solutions/Acme.App.sln ./Solutions/
COPY Solutions/nuget.config ./Solutions/
COPY Presentation/Acme.App/*.vbproj ./Presentation/Acme.App/
COPY Presentation/Acme.App/packages.config ./Presentation/Acme.App/
COPY Libraries/SS.Core/*.vbproj ./Libraries/SS.Core/
COPY Libraries/SS.Data/*.vbproj ./Libraries/SS.Data/
COPY Libraries/SS.Logic/*.vbproj ./Libraries/SS.Logic/

And then you will need to run nuget restore in the relevant place. In my case, there is a nuget.config in the Solutions folder, and that is where the command runs on our current TeamCity build, so it is easy to set up the same thing here:

WORKDIR /app/Solutions
RUN nuget restore Acme.App.sln -Source https://api.nuget.org/v3/index.json -Source https://nuget.Acme.io/feed/index.json 


This is a slow and rarely changing layer, so it goes as early as possible. The next part is Node: since I need to use NPM and this is also a rarely changing step, I am going to do it now, before building the project. I could have done it before the copying part above, but whatever!

# Node js installation and running gulp
FROM copy AS nodejs

SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop';$ProgressPreference='silentlyContinue';"]
RUN [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12; `
    Invoke-WebRequest -OutFile nodejs.zip -UseBasicParsing "https://nodejs.org/dist/v14.15.4/node-v14.15.4-win-x64.zip"; `
    Expand-Archive nodejs.zip -DestinationPath C:\; `
    Rename-Item "C:\node-v14.15.4-win-x64" C:\nodejs

WORKDIR C:\nodejs
RUN SETX PATH C:\nodejs
RUN npm config set registry https://registry.npmjs.org/
RUN npm install -g gulp-cli

A couple of things are important here: 1) I am using a known version of Node, not whatever is latest; 2) be very careful splitting RUN commands over multiple lines. If the commands are all related it is faster to combine them, because they then produce a single filesystem layer, but if you get them wrong, for example a misplaced semi-colon confusing the command interpreter, you might end up setting your registry incorrectly! Since Docker is so good at caching, it 100% makes sense to build these up line by line, but then test the result by running the container and poking around in its CLI. For example, my installation of gulp-cli was failing, but you couldn't see that from the docker build console; it only failed later when running gulp build. With a shell inside the container, I could have checked that it was installed.
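For example, something along these lines (the image tag is just for illustration) builds only as far as the node stage and drops you into a shell in it:

docker build --target nodejs -t acme-app-node .
docker run -it --rm acme-app-node powershell
# then, inside the container:
node --version
npm --version
gulp --version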

Now that this is done, I can start looking at my project and copying everything else:

# copy everything else
FROM prepare AS copy
WORKDIR /app
COPY Presentation/Acme.App/ ./Presentation/Acme.App/
COPY Libraries/SS.Core/ ./Libraries/SS.Core/
COPY Libraries/SS.Data/ ./Libraries/SS.Data/
COPY Libraries/SS.Logic/ ./Libraries/SS.Logic/
COPY ThirdParty/SalesforceApi/ ./ThirdParty/SalesforceApi/
COPY ThirdParty/StripeApi/ ./ThirdParty/StripeApi/
COPY ThirdParty/XeroApi/ ./ThirdParty/XeroApi/

Why not copy everything earlier on, when we copied the vbproj files? Because then any change to any file in those directories would invalidate the cache and rerun the whole build from that point. This way, only changes to the vbproj or packages.config files trigger the earlier part of the build; otherwise we get straight to here from the cache.

Then we move into the relevant app directory, run npm install against it and follow that with a gulp build. Note I am using an .npmrc for an Azure DevOps feed and then deleting it so it doesn't end up exposed in the output container. I have added it to the local project just so the build can take place, but I need to find a better way of passing the secret token into the docker build.

WORKDIR /app/Presentation/Acme.App/
RUN npm install
RUN del .npmrc
RUN gulp build


Once this has run, again, it is slow, and I should probably have done it after copying only the relevant source files so it could be cached too, but otherwise we just need to build.

WORKDIR /app
RUN msbuild Solutions/Acme.App.sln /p:Configuration=Debug -r:False /p:WarningLevel=0

This is building for unit tests so I am using debug. I have also set the warning level to 0 to avoid the gazillions of project warnings. I have not yet implemented the unit tests so that part is missing.
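When I do add them, something like the following should be close, since the SDK image ships with the Visual Studio Test Agent (the test project path is hypothetical):

# run the unit tests against the Debug build
WORKDIR /app
RUN vstest.console.exe Tests\Acme.App.Tests\bin\Debug\Acme.App.Tests.dll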

Then we build for Release:

WORKDIR /app
RUN msbuild Solutions/Acme.App.sln /p:Configuration=Release -r:False /p:WarningLevel=0

We then take the chance to create a runtime image. We don't need the SDK any more, which on Linux saves a large percentage of the image size but on Windows doesn't save much of the roughly 15GB container! We use the following aspnet image and have to download and install the URL Rewrite module for IIS.

We copy a static file, which will eventually become a secret, and copy the output of the build from our build stage. I am not sure whether I should be precompiling and publishing instead, but I just wanted to get the container up and running to prove it works.

FROM mcr.microsoft.com/dotnet/framework/aspnet:4.7.2 AS runtime
ADD https://download.microsoft.com/download/1/2/8/128E2E22-C1B9-44A4-BE2A-5859ED1D4592/rewrite_amd64_en-US.msi c:/inetpub/rewrite_amd64_en-US.msi
RUN powershell -Command Start-Process c:/inetpub/rewrite_amd64_en-US.msi -ArgumentList "/qn" -Wait

COPY shared.config /hostingspaces/surveys/

WORKDIR /inetpub/wwwroot
EXPOSE 80
COPY --from=build /app/Presentation/Acme.App/. ./
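On the precompile/publish question above, a sketch of what that could look like back in the build stage is below; the publish properties and output path are assumptions I haven't tested, and a checked-in publish profile may be the easier route:

# publish the web app to a folder instead of using the raw build output
RUN msbuild Presentation\Acme.App\Acme.App.vbproj /p:Configuration=Release /p:WarningLevel=0 `
    /p:DeployOnBuild=true /p:WebPublishMethod=FileSystem /p:DeployDefaultTarget=WebPublish `
    /p:publishUrl=C:\publish

The runtime stage would then copy from the C:\publish folder instead of the raw project directory.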

It did work (after about the 30th attempt), although it did take a long time to start up. Also note that on Windows (at least with the versions I was using), you cannot access the container via localhost or 127.0.0.1; you need to use docker inspect to find the IP of the container and access it using that IP and the container port (80 in our case).
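To find the container's IP, docker inspect with a format string does the job:

docker inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}" <container-name-or-id>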

Enjoy!