Month: March 2017

Pre-compress static web files using GZip and Brotli automatically

If you’ve worked with the web for any amount of time, you’ll know that compression is one of the very best ways of improving page load times. You might also be annoyed that you’re wasting CPU cycles compressing the same files over and over – not to mention the added latency while you wait for the compression to complete. The best option is to pre-compress all of your static files as part of the build or deploy process of your web application. To handle exactly this, I’ve created a small Node script that recurses through a directory, compressing every file it finds.

The Code

You can find the Gist here: https://gist.github.com/danclarke/7a5b647d38a63241b71fb3743db15160

Simply update the last line to point to the directory you want to compress. By default, I’ve got the script compressing ‘dist’.

compressDir('dist');
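
If you’re curious how it works, here’s a simplified sketch of the approach – an illustration rather than the gist itself. It assumes Node 11.7 or later, where Brotli support is built into zlib; the original script predates that and uses a separate Brotli module:

// Recurse through a directory, writing a .gz and .br variant next to every file
const fs = require('fs');
const path = require('path');
const zlib = require('zlib');

function compressFile(filePath) {
    const contents = fs.readFileSync(filePath);

    // Use maximum compression – these files are compressed once at build time,
    // so the extra CPU cost is only paid once
    fs.writeFileSync(filePath + '.gz',
        zlib.gzipSync(contents, { level: zlib.constants.Z_BEST_COMPRESSION }));
    fs.writeFileSync(filePath + '.br', zlib.brotliCompressSync(contents));
}

function compressDir(dirPath) {
    for (const entry of fs.readdirSync(dirPath)) {
        const fullPath = path.join(dirPath, entry);

        if (fs.statSync(fullPath).isDirectory()) {
            compressDir(fullPath); // Recurse into subdirectories
        } else if (!/\.(gz|br)$/.test(entry)) {
            compressFile(fullPath); // Skip files that are already compressed output
        }
    }
}

compressDir('dist');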

Nginx

Next, you’ll need to configure your web server to serve the pre-compressed files instead of compressing them on the fly. For Nginx, first make sure your build includes the ngx_http_gzip_static_module (enabled with the --with-http_gzip_static_module configure flag) and the third-party ngx_brotli module.

Then add the following lines to your Nginx config, in either the http block or the relevant location block:

gzip_static   on;
brotli_static on;

And that’s it! Nginx will now serve the pre-compressed .gz or .br version of a file whenever the client’s Accept-Encoding header allows it, falling back to the uncompressed original otherwise.

Docker

If you want to use Nginx in a Docker container with Brotli, you can use this very cool GitHub project: https://github.com/fholzer/docker-nginx-brotli. Then, in the Dockerfile for your website, make sure you use the custom Nginx image instead of the official one.
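
In practice, the Dockerfile can stay very small. Here’s a sketch – the fholzer/nginx-brotli image name and the paths assume the official Nginx image layout, so check the project’s README for the specifics:

# Base the image on the Brotli-enabled Nginx build instead of the official nginx image
FROM fholzer/nginx-brotli

# Copy the pre-compressed static site and the Nginx config into the image
COPY dist /usr/share/nginx/html
COPY nginx.conf /etc/nginx/conf.d/default.conf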

My configuration for Nginx looks like the following:

server {
    listen       80;
    server_name  localhost;

    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
        gzip_static   on;
        brotli_static on;
    }
}
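
Building and running the container is then the usual Docker workflow (the image name and host port here are illustrative):

docker build -t my-site .
docker run -d -p 8080:80 my-site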

Azure Blob Upload Speed – Don’t use OpenWriteAsync()

Uploading to Azure Storage with the .NET Client can often require some customisation to ensure acceptable performance. First, let’s look at some options in BlobRequestOptions:

SingleBlobUploadThresholdInBytes

The size threshold above which a blob will be uploaded in ‘chunks’ (blocks) rather than as a single request. This only applies to the non-stream-based upload methods.

Minimum: 1MB, or 1,048,576 bytes.

ParallelOperationThreadCount

The maximum number of upload operations to perform in parallel, for a single blob.

There’s also a useful property on the Blob itself:

StreamWriteSizeInBytes

The size of each block to upload. For example, if you set it to 1MB, a 4MB file will be split into four separate 1MB blocks.

Default value: 4MB, or 4,194,304 bytes.

// Options
var options = new BlobRequestOptions
{
    SingleBlobUploadThresholdInBytes = 1024 * 1024, //1MB, the minimum
    ParallelOperationThreadCount = 1
};

client.DefaultRequestOptions = options;

// Blob stream write
blob.StreamWriteSizeInBytes = 1024 * 1024;

A more thorough explanation is available here: https://www.simple-talk.com/cloud/platform-as-a-service/azure-blob-storage-part-4-uploading-large-blobs/

When it’s all ignored – OpenWriteAsync()

You can set all of these options, but if you use blob.OpenWriteAsync() it’s going to upload files in 5KB chunks as you write to the stream. This will absolutely destroy performance if you’re uploading larger files or a lot of files. Instead, you’ll need to use the blob.UploadFromStreamAsync() method:

// Buffer to a memory stream so that the client uploads in one chunk instead of multiple
// By default the client seems to upload in 5KB chunks
using (var memStream = new MemoryStream())
{
    // Save to memory stream
    await saveActionAsync(memStream);

    // Upload to Azure
    memStream.Seek(0, SeekOrigin.Begin);
    await blob.UploadFromStreamAsync(memStream);
}

If you use the UploadFromStreamAsync() method, your settings will be honoured and blobs will be uploaded in a much more efficient manner.
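
Putting it all together, an end-to-end upload might look something like the following sketch (the container name, method name and thread count are illustrative; it assumes the WindowsAzure.Storage client library):

using System.IO;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public static class BlobUploader
{
    public static async Task UploadFileAsync(string connectionString, string filePath)
    {
        var account = CloudStorageAccount.Parse(connectionString);
        var client = account.CreateCloudBlobClient();

        // Chunk any blob over 1MB and upload up to four blocks in parallel
        client.DefaultRequestOptions = new BlobRequestOptions
        {
            SingleBlobUploadThresholdInBytes = 1024 * 1024,
            ParallelOperationThreadCount = 4
        };

        var container = client.GetContainerReference("uploads"); // Illustrative name
        await container.CreateIfNotExistsAsync();

        var blob = container.GetBlockBlobReference(Path.GetFileName(filePath));
        blob.StreamWriteSizeInBytes = 1024 * 1024; // Upload in 1MB blocks

        using (var fileStream = File.OpenRead(filePath))
        {
            // UploadFromStreamAsync honours the options above; OpenWriteAsync would not
            await blob.UploadFromStreamAsync(fileStream);
        }
    }
}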
