Webpack with Angular2

In a previous post I wrote about how to manage bundles in Angular 2 with SystemJS.

The latest versions of the Angular CLI have since moved to webpack, and we can use it as the module bundler in new Angular 2/4 applications.

The reason for this choice is that webpack has a rich plugin ecosystem that gives us fine-grained control over module bundling.

Entries

When we configure webpack, the first thing we need to do is define the entry points of the bundles.

Every bundle has a main entry point, that is, the module from which webpack starts to build the dependency hierarchy.

I like to build two different bundles: the first one for the vendor libraries and the second one for the application modules.

The main entry for the vendors is a TypeScript file that looks like this:


import '@angular/platform-browser';
import '@angular/platform-browser-dynamic';
import '@angular/core';
import '@angular/common';
import '@angular/http';
import '@angular/router';
import '@angular/animations';

import 'jquery';
import 'lodash';
import 'ng2-toastr';
import 'ng2-translate';
import 'reflect-metadata';
import 'zone.js';
import 'rxjs';

The second one is the main module for the app:


export { PostModule } from "./post.module";

Webpack loads these modules, walks the hierarchy of their descendants, and processes every dependency it finds.

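A minimal entry section that wires these two files up could look like the following sketch; the file names and paths are assumptions, so adjust them to your project layout:

entry: {
    'vendor': './src/vendor.ts', // the vendor imports file shown above
    'app': './src/app.ts'        // the application entry that exports the root module
    // an additional 'app.admin' entry could back the admin bundle referenced later
}
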
Loaders

Loaders are webpack tools that can process specific file types and produce the right bundle outputs.

For example, specific loaders are needed to make webpack able to resolve the Angular HTML templates and the lazy-loaded router modules, or, more simply, to compile TypeScript files.

Below you can see a bunch of loaders used in an Angular 2 application:


module: {
    rules: [
        // TypeScript: compile, inline Angular templates/styles and
        // rewrite lazy router paths (angular2-router-loader)
        {
            test: /\.ts$/,
            loaders: [
                {
                    loader: 'awesome-typescript-loader',
                    options: { configFileName: 'tsconfig.json' }
                },
                'angular2-template-loader',
                'angular2-router-loader'
            ]
        },
        // Component templates
        {
            test: /\.html$/,
            loader: 'html-loader'
        },
        // Images and fonts
        {
            test: /\.(png|jpe?g|gif|svg|woff|woff2|ttf|eot|ico)$/,
            loader: 'file-loader?name=images/[name].[hash].[ext]'
        },
        // Global styles: extracted to a separate CSS file
        {
            test: /\.css$/,
            exclude: helpers.root('src', 'app'),
            loader: ExtractTextPlugin.extract({ fallbackLoader: 'style-loader', loader: 'css-loader?sourceMap' })
        },
        // Component styles: loaded as raw strings for the Angular components
        {
            test: /\.css$/,
            include: helpers.root('src', 'app'),
            loader: 'raw-loader'
        }
    ]
}

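One related setting worth mentioning: for the TypeScript rule to resolve imports written without explicit extensions, the configuration typically also needs a resolve section like this sketch:

resolve: {
    extensions: ['.ts', '.js']
}
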
Plugins

Plugins are utilities that help us with additional tasks in the webpack build.

For example, we may want a chunk shared by different modules of the application to appear only once in the output, instead of being repeated in every bundle that uses it.

CommonsChunkPlugin deals with this:


plugins: [

    .....

    new webpack.optimize.CommonsChunkPlugin({
        name: ['vendor']
    }),

    .....

]

Another requirement could be writing the script references into the index.html file of the application.

HtmlWebpackPlugin can help us with this:


plugins: [

    .....

    new HtmlWebpackPlugin({
        filename: 'index.html',
        template: 'src/index.html',
        chunks: ['app', 'vendor']
    }),

    new HtmlWebpackPlugin({
        filename: 'index.admin.html',
        template: 'src/index.admin.html',
        chunks: ['app.admin', 'vendor']
    }),

    .....
]

Output

In this section we configure the output files of the webpack process.

We have to produce these files in the public folder of the application; the output section looks like this:


output: {
    path: helpers.root('wwwroot'),
    filename: 'js/[name].js',
    chunkFilename: 'js/[id].chunk.js'
}


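For completeness, the snippets above assume these modules are required at the top of the webpack configuration file; helpers is assumed to be a small utility module exposing a root function that resolves paths from the project root:

const webpack = require('webpack');
const HtmlWebpackPlugin = require('html-webpack-plugin');
const ExtractTextPlugin = require('extract-text-webpack-plugin');
const helpers = require('./helpers'); // assumed utility: helpers.root(...) resolves project paths
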
You can find the source code here.

Manage attachments chunks with ASP.NET Web Api

In the previous post I wrote about a custom MultipartFormData stream provider and how it can help us manage custom information included in a request message.

In that example I generated chunks from a file and sent them to a REST service (a Web API) together with some additional information that was then retrieved through the custom provider.

Now I want to use this information to manage the upload session and merge all the chunks once they have been received.

What I need to do is define the models involved in the process and the service that manages the chunks.

Models

We have to define two things; the first one is the model for the chunk:


public class ChunkMetadata
{
    public string Filename { get; set; }
    public int ChunkNumber { get; set; }

    public ChunkMetadata(string filename, int chunkNumber)
    {
        Filename = filename;
        ChunkNumber = chunkNumber;
    }
}

The ChunkNumber property deserves an explanation: it is the sequence number assigned to the chunk, and it will be used to restore the correct order when we have to merge all of them.

The second one is the model of the session, that is, the set of chunks that compose the file.

First of all we define the interface:


public interface IUploadSession
{
    ConcurrentBag<ChunkMetadata> Chunks { get; set; }
    string Filename { get; }
    long Filesize { get; }
    bool AddChunk(string filename, string chunkFileName, int chunkNumber, int totalChunks);
    Task MergeChunks(string path);
}

The Filename and Filesize properties are closely tied to the session; we need the AddChunk and MergeChunks methods as well.

We also need a thread-safe collection for the chunks that compose the session, so we use a ConcurrentBag, the thread-safe counterpart of List.

Now we can implement the model:


public class UploadSession : IUploadSession
{
    public string Filename { get; private set; }
    public long Filesize { get; private set; }
    private int _totalChunks;
    private int _chunksUploaded;

    public ConcurrentBag<ChunkMetadata> Chunks { get; set; }

    public UploadSession()
    {
        Filesize = 0;
        _chunksUploaded = 0;
        Chunks = new ConcurrentBag<ChunkMetadata>();
    }

    public bool AddChunk(string filename, string chunkFileName, int chunkNumber, int totalChunks)
    {
        if (Filename == null)
        {
            Filename = filename;
            _totalChunks = totalChunks;
        }

        var metadata = new ChunkMetadata(chunkFileName, chunkNumber);
        Chunks.Add(metadata);

        // Interlocked.Increment already updates the field atomically
        // and returns the new value; no extra assignment is needed
        var uploaded = Interlocked.Increment(ref _chunksUploaded);
        return uploaded == _totalChunks;
    }

    public async Task MergeChunks(string path)
    {
        var filePath = path + Filename;

        using (var mainFile = new FileStream(filePath, FileMode.Create))
        {
            foreach (var chunk in Chunks.OrderBy(c => c.ChunkNumber))
            {
                using (var chunkFile = new FileStream(chunk.Filename, FileMode.Open))
                {
                    await chunkFile.CopyToAsync(mainFile);
                    Filesize += chunkFile.Length;
                }
            }
        }

        // The chunk files are no longer needed once they are merged
        foreach (var chunk in Chunks)
        {
            File.Delete(chunk.Filename);
        }
    }
}

The implementation is quite simple.

The AddChunk method adds the new chunk to the collection, then increments the _chunksUploaded field with the thread-safe operation Interlocked.Increment; at the end, the method returns a bool that is true if all the chunks have been received, otherwise false.

The MergeChunks method deals with retrieving all the chunks from the file system.

It takes the collection, orders it by chunk number, reads the bytes from every chunk and copies them to the main file stream.

Finally, the chunk files are deleted.

Service

The service will have an interface like this:


public interface IUploadService
{
    Guid StartNewSession();
    Task<bool> UploadChunk(HttpRequestMessage request);
}

The idea is that the StartNewSession method instantiates a new session object and assigns it a new correlation id, which is the unique identifier of the session.

This is the implementation:


public class UploadService : IUploadService
{
    private readonly Context _db = new Context();
    private readonly string _path;
    private readonly ConcurrentDictionary<string, UploadSession> _uploadSessions;

    public UploadService(string path)
    {
        _path = path;
        _uploadSessions = new ConcurrentDictionary<string, UploadSession>();
    }

    public async Task<bool> UploadChunk(HttpRequestMessage request)
    {
        var provider = new CustomMultipartFormDataStreamProvider(_path);
        await request.Content.ReadAsMultipartAsync(provider);
        provider.ExtractValues();

        UploadSession uploadSession;
        _uploadSessions.TryGetValue(provider.CorrelationId, out uploadSession);

        if (uploadSession == null)
            throw new ObjectNotFoundException();

        var completed = uploadSession.AddChunk(provider.Filename, provider.ChunkFilename, provider.ChunkNumber, provider.TotalChunks);

        if (completed)
        {
            await uploadSession.MergeChunks(_path);

            var fileBlob = new FileBlob()
            {
                Id = Guid.NewGuid(),
                Path = _path + uploadSession.Filename,
                Name = uploadSession.Filename,
                Size = uploadSession.Filesize
            };

            _db.FileBlobs.Add(fileBlob);
            await _db.SaveChangesAsync();

            return true;
        }

        return false;
    }

    public Guid StartNewSession()
    {
        var correlationId = Guid.NewGuid();
        var session = new UploadSession();
        _uploadSessions.TryAdd(correlationId.ToString(), session);

        return correlationId;
    }
}

In the StartNewSession method we use the thread-safe TryAdd method to add a new session to the ConcurrentDictionary.

As for the UploadChunk method, we saw the first part of the implementation in the previous post.

Once the metadata has been retrieved from the request, we look up the session object with a thread-safe operation.

If we don't find the object, we throw an exception, because we expect the related session to exist.

If the session exists, we add the chunk to it and check the result of the operation.

If it is the last chunk, we merge all of them and, if needed, we can run a database operation.

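For reference, here is a minimal sketch of the FileBlob entity and the Entity Framework context used by the service; the property names are inferred from the code above, and the real definitions live in the linked source code:

using System;
using System.Data.Entity;

public class FileBlob
{
    public Guid Id { get; set; }
    public string Name { get; set; }
    public string Path { get; set; }
    public long Size { get; set; }
}

public class Context : DbContext
{
    public DbSet<FileBlob> FileBlobs { get; set; }
}
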
Controller

The implementation of the controller is very simple:


public class FileBlobsController : ApiController
{
    private readonly IUploadService _fileBlobsService;
    private readonly Context _db = new Context();

    public FileBlobsController(IUploadService uploadService)
    {
        _fileBlobsService = uploadService;
    }

    [Route("api/fileblobs/getcorrelationid")]
    [HttpGet]
    public IHttpActionResult GetCorrelationId()
    {
        return Ok(_fileBlobsService.StartNewSession());
    }

    [HttpPost]
    public async Task<IHttpActionResult> PostFileBlob()
    {
        // Reject requests that are not multipart form data
        if (!Request.Content.IsMimeMultipartContent())
            throw new HttpResponseException(HttpStatusCode.UnsupportedMediaType);

        var result = await _fileBlobsService.UploadChunk(Request);

        return Ok(result);
    }
}

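To see the whole flow end to end, here is a client-side sketch: it asks for a correlation id, splits a file into chunks and posts each one as multipart form data. The form field names (correlationId, filename, chunkNumber, totalChunks) are assumptions based on the custom provider from the previous post, and the POST is assumed to hit the default api/fileblobs route:

using System;
using System.IO;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

public static class UploadClient
{
    private const int ChunkSize = 1024 * 1024; // 1 MB per chunk

    public static async Task UploadAsync(string filePath, string baseUrl)
    {
        using (var client = new HttpClient())
        {
            // 1. Ask the service for a new upload session (correlation id);
            // the GUID comes back JSON-encoded, so strip the quotes
            var correlationId = (await client.GetStringAsync(baseUrl + "/api/fileblobs/getcorrelationid")).Trim('"');

            var bytes = File.ReadAllBytes(filePath);
            var totalChunks = (int)Math.Ceiling((double)bytes.Length / ChunkSize);

            for (var i = 0; i < totalChunks; i++)
            {
                var chunk = bytes.Skip(i * ChunkSize).Take(ChunkSize).ToArray();

                // 2. Post each chunk with the metadata the provider expects
                // (field names are assumptions from the previous post)
                var content = new MultipartFormDataContent();
                content.Add(new StringContent(correlationId), "correlationId");
                content.Add(new StringContent(Path.GetFileName(filePath)), "filename");
                content.Add(new StringContent(i.ToString()), "chunkNumber");
                content.Add(new StringContent(totalChunks.ToString()), "totalChunks");
                content.Add(new ByteArrayContent(chunk), "file", Path.GetFileName(filePath));

                await client.PostAsync(baseUrl + "/api/fileblobs", content);
            }
        }
    }
}
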
You can find the source code here.
