Building an ASP.NET Core Docker Image on Linux

ASP.NET Core is super awesome, especially since it plays really well with modern deployment methods like containers. Microsoft has even gone as far as pre-packaging an optimised .NET Core Docker image with the core libraries pre-compiled for a super-fast startup. So let's get started. First you'll need to install Docker and the .NET Core tooling on your Linux machine if you haven't already.

Add a Dockerfile

Add a new text file called 'Dockerfile' (case sensitive) to the root of your project, and make sure it doesn't have any extension (such as .txt). In the Dockerfile add the following code:

# Base image: the .NET Core 1.1 runtime optimised for ASP.NET Core
FROM microsoft/aspnetcore:1.1.0
# All subsequent commands run inside /app in the container
WORKDIR /app
# Copy the published output into the image
COPY ./output .
# Launch the site via the .NET Core runtime
ENTRYPOINT ["dotnet", "MySite.dll"]

Update the 'MySite.dll' reference to the name of your project with a .dll extension. While you're at it, change the .NET Core version number if you're not using 1.1 like I am. I highly recommend 1.1 or later on Linux due to much better performance.

Build from command line

Run the following commands in the project directory:

dotnet restore
dotnet publish -o output -c release

This will get all dependencies, compile a release build, and put the result into the 'output' directory. The reason we're using a sub-directory is a bug in the tooling: if we use a parent relative path (../), the 'publishOptions/include' config setting in project.json is ignored and you'll be missing a chunk of your project!
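For reference, the publishOptions section of project.json looks something like this – the include list below is just the standard template's, so yours may differ:

"publishOptions": {
  "include": [
    "wwwroot",
    "Views",
    "appsettings.json",
    "web.config"
  ]
}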

Build the docker image

Now let’s get that code into a Docker image! Run the following command:

sudo docker build -t myapp .

Feel free to rename myapp to something a little more descriptive. You can also specify multiple tags, such as:

sudo docker build -t myapp -t myapp:1.0 .

Tagging is really up to you. If you're publishing to a repository (likely), make sure to add the applicable repository tag to the list.
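For example, pushing to a registry usually looks something like this – myregistry.example.com is just a placeholder for your own registry:

sudo docker tag myapp myregistry.example.com/myapp:1.0
sudo docker push myregistry.example.com/myapp:1.0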

Running the image

Run the following command to start up a new container:

sudo docker run -p 8000:80 myapp 

Make sure to update myapp to the name you used earlier when building. Your site should now be accessible on port 8000: http://localhost:8000

You may be wondering why we forwarded to port 80, and not 5000 or whatever port you've specified in launchSettings.json. As part of the aspnetcore image, an environment variable (ASPNETCORE_URLS) is set to tell Kestrel to host on port 80. This can be overridden either in your Dockerfile, or using .UseUrls() on the WebHostBuilder in your Program.cs file.
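For example, to listen on port 5000 instead, either add this to your Dockerfile:

ENV ASPNETCORE_URLS http://+:5000

or set the URL in code – a sketch of the usual Program.cs Main body, assuming the standard WebHostBuilder setup:

var host = new WebHostBuilder()
    .UseKestrel()
    .UseUrls("http://+:5000") // overrides ASPNETCORE_URLS
    .UseContentRoot(Directory.GetCurrentDirectory())
    .UseStartup<Startup>()
    .Build();

host.Run();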

Don’t connect directly to the container

Remember, the site is still running in Kestrel so it’s not magically secure by virtue of running in a container. Make sure to use a reverse-proxy such as nginx when running your site in a production environment. Microsoft are working on hardening Kestrel so that you can use it directly in the future – but we’re not there yet.

Possible errors

Did you mean to run dotnet SDK commands?

You might get the following error upon execution:

Did you mean to run dotnet SDK commands? Please install dotnet SDK from:
http://go.microsoft.com/fwlink/?LinkID=798306&clcid=0x409

This basically means the .dll you referenced couldn’t be found. Double-check all of your paths and filenames. It’s easy to include the source instead of the output, or even save to / from the wrong directory.

Could not load <dependency>.dll

If you get dependency errors on execution, it means the output of the publish wasn’t saved to the /app folder within the container. The .dll files from the publish operation must all go directly into the root of the /app folder in the container. This means you can’t rename /app to something else.
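A quick way to check what actually made it into the image is to run a throwaway container that just lists /app:

sudo docker run --rm --entrypoint ls myapp /app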

More information

You can get more information from:

Github

Official MS Github: https://github.com/aspnet/aspnet-docker. Navigate to the version of .NET Core you want (1.0/jessie, 1.1/jessie, etc.). If you’re using the output of a publish, you’ll want the ‘runtime’ subfolder.

Docker

Official MS Docker Hub: https://hub.docker.com/r/microsoft/aspnetcore/. A quick overview of the base image, although it looks like MS are focusing most of their attention on the GitHub account.


How ‘this’ works in Javascript

The this keyword in Javascript has a certain reputation for being awkward. Fortunately, it’s very easy to learn how this works, and then leverage it for your benefit rather than detriment.

Let’s start with some code to play around with:

name = 'Global';

function myClass() {
    this.name = 'My Class';

    this.getMyName = function() {
        return this.name;
    };
}

function person() {
    this.name = 'Bob';
}

var classInstance = new myClass();
var nameMethod = classInstance.getMyName;
var personInstance = new person();

In this example, the function ‘getMyName’ gets the name value from ‘this’. Let’s see what it outputs:

console.log(classInstance.getMyName()); // My Class

So far, so good. But what if we call the method directly, and not via the class?

console.log(nameMethod()); // Global

Uh oh, we're seeing the global 'name' value, not the one we set up in the class. What's happening? Conceptually, this is passed in implicitly as a hidden first argument to the function – you just never see it. By default, this is the target you call the function on. So the definition and execution conceptually look like the following (not valid JavaScript, just an illustration):

function(this) {
    return this.name;
};

console.log(classInstance.getMyName(classInstance));

When you call the method directly without the classInstance receiver, the global object is passed in as this (at least in non-strict mode). As a result, we can do some pretty clever things. Remember how we have another class called 'person'? We can detach the method from myClass and call it using person:

console.log(nameMethod.call(personInstance)); // Bob

Hang on, what’s this ‘call’ method? Call allows you to explicitly set the this value. The first parameter will be mapped to this, while all subsequent parameters will be passed directly to the function. If you don’t specify a parameter, call will implicitly use the global scope:

console.log(classInstance.getMyName.call()); // Global

Pretty cool, huh? This is how jQuery allows you to use this to refer to the element you’re working on with your anonymous functions and the like.
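As a quick illustration of the parameter passing, here's a hypothetical greet function – everything after call's first argument is passed straight through:

function greet(greeting, punctuation) {
    return greeting + ', ' + this.name + punctuation;
}

console.log(greet.call(personInstance, 'Hello', '!')); // Hello, Bob!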

But, what if we want getMyName to always refer to the name within the class? For this we’ll want to ‘capture’ the this variable so that it never changes:

function myClass() {
    var _this = this; // capture the instance (note the var – without it _this would be a global)

    this.name = 'My Class';

    this.getMyName = function() {
        return _this.name;
    };
}

By copying this into a local variable at instantiation, we can then bring it into the getMyName function via a closure. Now the output is as expected without any this shenanigans:

console.log(classInstance.getMyName()); // My Class
console.log(classInstance.getMyName.call()); // My Class
console.log(nameMethod()); // My Class
console.log(nameMethod.call(personInstance)); // My Class
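As an aside, Function.prototype.bind achieves the same pinning without the closure – a sketch using the original (non-captured) version of myClass:

var boundMethod = classInstance.getMyName.bind(classInstance);
console.log(boundMethod()); // My Class
console.log(boundMethod.call(personInstance)); // Still My Class – bind wins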

Alexa Skills with ASP.NET Core – Porting Reindeer Games

Making a custom Alexa skill is surprisingly easy, especially with the samples available from Amazon. I recently took the opportunity to port the NodeJS Reindeer Games project to .NET, and at the same time improve the code to something a little more readable and maintainable.

I’ve put the code up on my GitHub here: https://github.com/danclarke/ReindeerGamesNetCore

There’s a project for both a Web Service hosted on Azure and a Lambda Function hosted on AWS.

Hosting-wise a Lambda Function is easily the preferred solution simply due to latency. Calls from Alexa to an AWS Lambda are much faster than leaving the data centre and calling your remote web service.

If you do need to create a web service it must be SSL secured. This is easy in production, but a little harder during development. To make this easier, you might want to use something like Caddy with Let’s Encrypt.


Read-Only Public Properties / State in C#

There are a few ways you may choose to implement a read-only property in C#. Some options are better than others, but how do you know which is the best? Let’s take a look at what you could potentially use and what the pros/cons are:

Naive approach

public class Reader
{
    private string _filename = "Inputfile.txt";

    public string Filename { get { return _filename; } }
}

At first glance, the code above should be ideal. It gives you an explicitly read-only property and correctly hides the internal backing variable. So, what’s wrong?

Internal variable is visible, as well as the property

This makes the code a little more brittle to changes. Inside the implementation of Reader, developers could use either _filename or the property Filename. If the code were later amended to make the Filename property virtual, every direct use of _filename would immediately become a bug. You always want the code to be as resilient as possible. Of course, you should always use the property in code, but mistakes are very easy to make and overlook. It's much better to make it impossible to write incorrect code – ensuring you end up in the pit of success.

_filename is mutable!

While the property itself is read-only, the backing variable is mutable! Our implementation of Reader is free to change the property as much as it likes. This might be desirable, but generally, we want the state to be as fixed as possible.
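To make that concrete, a hypothetical method added to Reader compiles without complaint:

public void Reset()
{
    // Perfectly legal: the backing field is mutable,
    // even though the public property is read-only
    _filename = "SomethingElse.txt";
}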

Better approach, but not ideal

public class Reader
{
    private const string _filename = "Inputfile.txt";

    public string Filename => _filename;
}

The property accessor is now more concise, and the backing store is read-only. This is good, but it could be better.

Use of const is risky

What? const is risky? In this case, it could be. Const works by substituting every use of the variable with the explicit value directly in the compiled IL. This means const is super-fast because there's no memory usage and no memory access. The drawback appears if the const value is changed but an assembly referencing this assembly isn't re-compiled: the other assembly will still see the 'old' value. In this example, if another assembly extended the Reader class, it wouldn't see changes to the private const value unless it was also re-compiled. To make it even more confusing – it would likely 'see' the changes with Debug builds, but not with Release builds. The simple fix is to use readonly instead of const for values that could change, such as settings. Genuine constants, such as Pi, should remain const.

Better again, but still not ideal

public class Reader
{
    private static readonly string _filename = "Inputfile.txt";

    public string Filename => _filename;
}

The code is getting a little more reliable here. The filename state variable is now static readonly, so its memory usage is low while still offering flexibility to changes – even if consuming assemblies aren't recompiled with every change we make to our assembly.

As an aside, read-only state variables should be static wherever possible. If the variable isn’t static, it’ll be created and initialised with every single instance of the class – rather than just once across all instances.

Ideal approach

public class Reader
{
    public string Filename { get; } = "Inputfile.txt";
}

This is the best solution because it gives us maximum flexibility with the fewest potential issues going forward.

Good Locality

The initialisation happens with the property declaration, ensuring high code locality. Before, the property could be in a completely different area of the code file to the actual implementation of the private backing state.

No access to backing variable / state

The backing variable is now hidden from us, so we can't accidentally use it instead of the property – even if we later make the property virtual.

Easy to change to constructor initialisation

We can just remove the initialiser and put initialisation into the constructor. The property is still read-only, and the backing variable is still invisible. Easy!
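A sketch of what that looks like – the property stays get-only but can be assigned in the constructor:

public class Reader
{
    public string Filename { get; }

    public Reader(string filename)
    {
        // A get-only auto-property may be assigned here and nowhere else
        Filename = filename;
    }
}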


ASP.NET Identity 3 with EF is bloated and slow… let’s fix that

Since first using ASP.NET 5 and MVC 6 I’ve been fighting the EF-based Identity framework. I can’t do much about the rather obtuse API, but I can fix the constant moaning about migrations and terrible performance by replacing the EF data stores with something a little (OK a LOT) better. I’m a big fan of the new ‘micro’ ORMs out there like PetaPOCO, Dapper, Massive, etc. They’re extremely easy to use, and incredibly fast – almost as fast as manually typing out ADO.NET commands yourself but without the pain. At work I use NPOCO, which is an extended and in my opinion ‘better’ version of PetaPOCO.

While the API for ASP.NET Identity may be 'odd', Microsoft have made it quite easy to extend / change the framework. Removing the hard dependency on Entity Framework is a simple case of replacing the 'UserStore' and 'RoleStore' with your own implementations. Technically you only have to support the features you're actually using, but it's pretty easy to support everything except the 'Queryable' functionality. You do this by implementing various interfaces such as the following (a registration sketch comes after the list):

  • IUserLoginStore<TUser>
  • IUserRoleStore<TUser>
  • IUserClaimStore<TUser>
  • IUserPasswordStore<TUser>
  • IUserSecurityStampStore<TUser>
  • IUserEmailStore<TUser>
  • IUserLockoutStore<TUser>
  • IUserPhoneNumberStore<TUser>
  • IUserTwoFactorStore<TUser>
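Wiring your stores in is then a one-liner in Startup.ConfigureServices – a sketch assuming hypothetical NPocoUserStore / NPocoRoleStore classes that implement the interfaces above:

services.AddIdentity<IdentityUser, IdentityRole>()
    .AddUserStore<NPocoUserStore>()
    .AddRoleStore<NPocoRoleStore>()
    .AddDefaultTokenProviders();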

Part of the reason the storage API is so over-engineered is so that the system can support both databases and remote services without impacting on performance too much. The disadvantage, ironically, is that performance is slightly lower if you’re using a traditional database as the data store. It also means when using Identity you have to explicitly ask for things, rather than being able to fetch a user and immediately see the available roles for example.

Microsoft have completely open-sourced Identity so you can see the default implementation over at GitHub here. I used this code as the basis for my own implementation. Of particular interest are the UserStore and RoleStore classes in the EntityFramework project.

I've open sourced my implementation and made it available on GitHub here. In short you lose almost no functionality, yet gain a staggering performance improvement of up to 1,136%. You can easily transition from your current EF version to the new NPOCO version since the DB schema is identical (in fact I've just stolen MS' schema for compatibility purposes). You can still use your own custom user and role classes as long as they inherit from IdentityUser and IdentityRole – just like with the MS EF version.


When to use an Interface and when to use a concrete type

When you first start working with Inversion of Control (might have to write a blog post on that…) & unit testing you’ll probably go interface crazy! Interfaces are great, and they really make it easy to unit test and take full advantage of inversion of control. However, sometimes you don’t want to use one. Fortunately if you’re using design patterns properly, it’s easy to decide if you need to use an interface or not:

  • Is the class just data storage? If so, it should always be a concrete type
  • Is the class a service that does something? If so, it should always have an interface (see the sketch below)
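A minimal sketch of the two cases – the names are purely illustrative:

// Data storage: concrete type, no interface needed
public class Person
{
    public string Name { get; set; }
}

// Service that does something: expose an interface for testing / IoC
public interface IPersonRepository
{
    void Save(Person person);
}

public class SqlPersonRepository : IPersonRepository
{
    public void Save(Person person) { /* write to the database */ }
}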

The problem is when you've got a class that's mostly data storage but also has some logic to it. That breaks 'Separation of Concerns' and leaves you with a problem! You'll want to expose an interface simply to make testing easier, but ideally you should refactor the logic out into one or more new classes.


Encapsulation

It's been a while since I've written anything, mostly because almost every 'how to' has already been written. So let's start looking into the more difficult realm of application architecture, rather than how to do this or that.

Encapsulation is one of the basic tenets of Object Oriented programming, and is often overlooked. In short, encapsulation means hiding internal functionality away from the consumers of your class. But what does that actually mean?

It means you've got some difficult decisions to make with each class you build! To make my point I'm going to use a class that accesses a database. This class saves people – think of it like a contacts list. The 'PeopleList' is going to save people, and then later fetch them from a database. A naive approach might have the following methods:

  • InsertPerson
  • UpdatePerson
  • DeletePerson
  • GetPerson
  • GetPeople

Where has the encapsulation been lost? With the Insert/Update methods, both of which shout that you're using a SQL database, exposing the underlying requirements of the DB to the user of your class. The users of your class don't care about the database you use, just that they want to persist people and then fetch them. Differentiating between inserts and updates is not relevant to the users of the class, and just adds additional work for them.

This is one of the most important things you must do each time you design a class – consider what the user of the class wants to do, then make that task as easy as possible. You only write the class once, but the class might be used many thousands of times over many years. A better approach would be to have the following methods instead:

  • SavePerson
  • DeletePerson
  • GetPerson
  • GetPeople

Now your class is much simpler to use. Does the user have to consider whether they've already saved the person? No – now all they need to do is throw a person at your class and you worry about the details. You've encapsulated the functionality so that the user doesn't have to worry about logic you should be worrying about.
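A minimal sketch of how SavePerson might hide the insert/update decision – assuming, for illustration, that an Id of 0 means the person hasn't been saved yet:

public void SavePerson(Person person)
{
    // InsertPerson/UpdatePerson are now private implementation details –
    // the caller never needs to know which one runs
    if (person.Id == 0)
        InsertPerson(person);
    else
        UpdatePerson(person);
}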

The great thing about doing this is that the underlying data store of the class can now be changed without any changes to its public interface. We could use a web service instead, and the class would still handle the logic itself. If the web services are well designed you've effectively reduced the query count by one, since the user of your class no longer has to check whether the person already exists. Not only is your new class easier to use, it's faster too!

Coming soon: Encapsulation Part 2!


‘env: node: No such file or directory’ Error in IntelliJ IDEA File Watcher

I was getting the following error when using the File Watcher feature in IntelliJ IDEA to compile TypeScript files:

/usr/local/bin/tsc --verbose --sourcemap test.ts
env: node: No such file or directory

The solution is relatively simple: you need to manually set the PATH environment variable in the File Watcher, like so:

[Screenshot: the PATH environment variable set in the IDEA File Watcher settings]

You can get your environment’s PATH variable from ~/.MacOSX/environment.plist if you’re rocking a Mac. Alternatively you can probably use this one (again for a Mac):

/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin:/opt/local/bin


BNS Error: The action request has expired

When using Background Agents in Windows Phone you may experience an InvalidOperationException with the message 'BNS Error: The action request has expired', like so:

[Screenshot: InvalidOperationException – BNS Error: The action request has expired]

This error means that your background task either threw an exception or failed to call NotifyComplete(). This error will not go away once you’ve fixed the issue – the background task must be removed. You can either do this in code, or uninstall the app and install it again.
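A sketch of a scheduled agent that avoids this error – wrap the work so NotifyComplete() is always called, and remove the task in code if you need a clean slate (the task name here is just an example):

protected override void OnInvoke(ScheduledTask task)
{
    try
    {
        // Do the actual background work here (e.g. update a live tile)
    }
    finally
    {
        // Always signal completion, even if the work above throws
        NotifyComplete();
    }
}

// Removing a broken task registration in code:
ScheduledActionService.Remove("TileUpdateAgent");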


InvalidOperationException and FileNotFoundException with Background Tasks in Windows Phone

There are a few Exceptions Windows Phone throws when a Background Task isn't correctly configured; sadly they're very misleading. In this blog post I'll outline some of the common exceptions you'll get and how to go about fixing them.

Failure to Run

When you call ‘ScheduledActionService.LaunchForTest’ you may get an InvalidOperationException like so:

[Screenshot: InvalidOperationException thrown by LaunchForTest]

This Exception means you haven’t added the ExtendedTask info in the WMAppManifest.xml file (found in ‘Properties’). Open the file in the XML editor (right-click on it and select ‘Open With’), and make sure you’ve entered the ExtendedTask info:

<Tasks>
  <DefaultTask Name="_default" NavigationPage="MainPage.xaml"/>
  <ExtendedTask Name="BackgroundTask">
    <BackgroundServiceAgent Specifier="ScheduledTaskAgent" Name="TileUpdateAgent" Source="WP7LiveTileDemo" Type="WP7LiveTileDemo.Agents.TileUpdateAgent" />
  </ExtendedTask>
</Tasks>

The values are as follows:

  • Specifier – The type of background agent, you can only have one of each type
  • Name – Name for this task, can be anything you like
  • Source – The name of the Assembly (without the .dll extension) that contains the agent. If this is wrong you’ll get both InvalidOperationException and FileNotFoundException exceptions
  • Type – The full class name (ie. must include the namespace path too) of the agent you set up in code. If this is wrong you’ll get an InvalidOperationException
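For completeness, the agent declared above is typically registered and test-launched from the app like this – a sketch using the names from the XML:

const string taskName = "TileUpdateAgent"; // must match the Name attribute above

// Remove any stale registration before adding the task again
if (ScheduledActionService.Find(taskName) != null)
    ScheduledActionService.Remove(taskName);

var task = new PeriodicTask(taskName)
{
    Description = "Updates the live tile" // a Description is required
};
ScheduledActionService.Add(task);

#if DEBUG
// Fire the agent quickly while debugging instead of waiting for the schedule
ScheduledActionService.LaunchForTest(taskName, TimeSpan.FromSeconds(30));
#endif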

InvalidOperationException and/or FileNotFoundException

If anything is wrong with the ExtendedTask declaration you'll get InvalidOperationException or FileNotFoundException thrown from Microsoft.Phone.ni.dll; it'll look something like this:

[Screenshots: InvalidOperationException and its stack trace from a BackgroundTask in WP]

Make sure to double-check your ExtendedTask XML is perfect, especially the Source and Type attributes.
