Looking deep inside OData Controllers

ASP.NET 4.5 provides three different classes we can inherit from to implement controllers: Controller, ApiController and ODataController.

In the latest applications I implemented, since they were SPAs, I rarely used the classic Controller (known as the MVC controller) and instead developed a large number of ApiControllers and ODataControllers.

The former are used to implement Web APIs, while the latter are used when we want a controller that implements the OData protocol.

In this post I want to talk about the main architectural differences between these two controller types and how they behave differently in the source code.

ApiExplorer

The first difference is that the ODataController class has the ApiExplorer setting IgnoreApi=true; this disables inspection, so the list of actions is not returned to clients that request it:


[ApiExplorerSettings(IgnoreApi = true)]

This matters for libraries like Swagger, which inspect the full list of Web APIs in the application and expose documentation about them.

With this default setting, the documentation for these controllers won’t be shown.
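To give an idea of how these defaults are wired up, this is a simplified sketch of how the framework declares the base class (not the exact source, but it shows the three attributes discussed in this post):

// Simplified sketch of the ODataController declaration
[ODataFormatting]
[ODataRouting]
[ApiExplorerSettings(IgnoreApi = true)]
public abstract class ODataController : ApiController
{
    // Created(), Updated() and the other helpers discussed below are defined here
}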

ODataFormattingAttribute

All OData controllers have a custom ODataFormattingAttribute applied by default.

This attribute deals with parsing the URL parameters and sending the results in the correct format.

For example, it is able to parse the OData $format query option (used to specify the format of the results) and format the results as XML, JSON or raw, as well as read the OData metadata header in the request and define the data to be sent.
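For instance, a request like the following (Blogs is just an example entity set) asks for JSON regardless of the Accept header:

GET /odata/Blogs?$format=json
GET /odata/Blogs?$format=application/json;odata.metadata=none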

This attribute also overrides the default action value binder with a PerRequestActionValueBinder and the default negotiator with a PerRequestContentNegotiator; these provide a specific action selector for OData requests and a negotiator that uses a per-request formatter for content negotiation.

ODataRoutingAttribute

The main duty of the ODataRoutingAttribute is to override the default action selector with the specific ODataActionSelector, which complies with the OData routing conventions and formats.

This selector retrieves the path of the request using the ODataPath class, which represents the path along with additional information about the EDM type and the entity set.

It retrieves the OData routing conventions from the request as well.

Then, using the path and the conventions, it resolves the action to execute in the controller context.

Its last duty is to instantiate an ODataProviderValueFactory, a custom class that extends ProviderValueFactory and parses and retrieves the routing conventions from the OData URL.
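To make the conventions more concrete, here is a minimal sketch of a conventional setup: the route registration and a controller whose action names follow the default OData routing conventions (the Blog entity and the names used here are assumptions, not code from this post):

// Route registration (typically in WebApiConfig.Register, where config is the HttpConfiguration)
ODataConventionModelBuilder builder = new ODataConventionModelBuilder();
builder.EntitySet<Blog>("Blogs");
config.MapODataServiceRoute("odata", "odata", builder.GetEdmModel());

// BlogsController: the action names and signatures follow the default routing conventions,
// so the ODataActionSelector can match GET /odata/Blogs and GET /odata/Blogs(1)
public class BlogsController : ODataController
{
    private static readonly List<Blog> _blogs = new List<Blog>();

    [EnableQuery]
    public IQueryable<Blog> Get()
    {
        return _blogs.AsQueryable();
    }

    [EnableQuery]
    public SingleResult<Blog> Get([FromODataUri] int key)
    {
        return SingleResult.Create(_blogs.Where(b => b.Id == key).AsQueryable());
    }
}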

Results

The last additions to the ODataController are two methods that send results to the client.

These methods use the CreatedODataResult class, which formats the response according to the OData standard.

Then the header with the location information is added, and the content with the information about the entity set, model and URL is created.
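As an illustration, a conventional POST action can rely on one of these helpers; this is just a minimal sketch (the Blog type and the _blogs list are assumptions):

// Hypothetical POST action in a controller derived from ODataController
public IHttpActionResult Post(Blog blog)
{
    if (!ModelState.IsValid)
    {
        return BadRequest(ModelState);
    }

    blog.Id = _blogs.Count + 1;
    _blogs.Add(blog);

    // Created() wraps the entity in a CreatedODataResult<Blog>, which emits the 201 status,
    // the Location header and the OData-formatted content described above
    return Created(blog);
}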

 

 

 

 


Upgrade an Angular 2 application to Angular 4

One year ago I wrote a series of posts about Angular 2 and how to develop a base application with this new framework.

Time has passed and many releases have come out in the meantime; the latest Angular 4 version is 4.4.6, and a new stable major version of the framework, Angular 5, has been released.

So I decided to upgrade that base application to version 4 and check what the main differences are.

I chose not to use angular-cli but to upgrade the application manually, so I didn’t have to create a new project.

During the upgrade I ran into some changes, which I describe below.

Packages

The first thing I did was upgrade the package.json file of my old application.

{
  "name": "angular4.web",
  "version": "0.0.0",
  "license": "MIT",
  "private": true,
  "dependencies": {
    "@angular/animations": "^4.4.6",
    "@angular/common": "^4.4.6",
    "@angular/compiler": "^4.4.6",
    "@angular/core": "^4.4.6",
    "@angular/forms": "^4.4.6",
    "@angular/http": "^4.4.6",
    "@angular/platform-browser": "^4.4.6",
    "@angular/platform-browser-dynamic": "^4.4.6",
    "@angular/router": "^4.4.6",
    .....
  }
  .....
}

With the “npm install” command I upgraded the Angular packages without particular problems, and the source code of the application was already compliant with the new framework version.

The only problem I encountered was with the translate module, which has moved from ng2-translate to ngx-translate.

Translate factory

In package.json I replaced the old package with the new one, like this:


{
  "dependencies": {
    .....
    "@ngx-translate/core": "^8.0.0",
    .....
  }
}

Then, in all the modules that depended on it, I updated the import with the new reference:


import { TranslateModule } from '@ngx-translate/core';

.....

Fortunately, the service and methods of the new package are the same as those of the ng2-translate package, so I didn’t have to change the code.

ngIf directive refactoring

Once the application was working, I tried to leverage some new features of Angular 4.

Specifically, I used the new ngIf/else syntax to simplify the HTML code of a couple of pages.

For example, this is how the invoice.component.html file looks now:

<div *ngIf="!edit; else editBlock">
  <input type="button" class="btn btn-primary" value="New" (click)="New()" />
  <table class="table table-striped">
    <thead>
      <tr>
        <th>{{ "NUMBER" | translate }}</th>
        <th>{{ "CUSTOMER" | translate }}</th>
        <th>{{ "EMISSIONDATE" | translate }}</th>
      </tr>
    </thead>
    <tbody>
      <tr *ngFor="let invoice of (invoices | searchInvoices: searchService.searchText)" (click)="Edit(invoice)">
        <td>{{invoice.Number}}/{{invoice.Year}}</td>
        <td>{{invoice.Customer.Name}}</td>
        <td>{{invoice.EmissionDate | date: "dd/MM/yyyy"}}</td>
      </tr>
    </tbody>
  </table>
</div>
<ng-template #editBlock>
  <app-invoice-detail *ngIf="invoice" [invoice]="invoice" [isNew]="newInvoice" [validationEnabled]="invoiceValidationEnabled"
                      (onClosed)="onClosed($event)" (onDeleted)="onDeleted($event)"></app-invoice-detail>
</ng-template>

I removed the hidden attribute from the original page and used the ngIf/else directive to manage the visibility of the detail block.

You can find the Angular 4 application here.


Build and run an ASP.NET Core application with Docker

The topic of this post is a technology that has become widespread in the world of virtualization: Docker.

It is based on the concept of containers: every application that runs on Docker is a container, a lightweight executable package that contains all the dependencies the application needs to run.

For instance, an ASP.NET Core application container will contain, among other things, the .NET Core framework.

This approach gives us the advantage of deploying the application without worrying about the configuration of the hosting environment; furthermore, a container is lighter and easier to manage than a traditional virtual machine and is preferable when we want to manage applications separately.

If we want to configure an ASP.NET Core application to run with Docker, there are a few steps we need to follow.

Docker installation

The first thing is to install the Docker environment that will run the containers.

We can refer to this page to choose the right installation; in my case I have a Windows Home desktop, so I need to install the Docker Toolbox instead of Docker for Windows.

The installation is very simple and, depending on your Windows version, if you don’t have Hyper-V installed (used by Docker to create local machines), VirtualBox will be installed as well.

Once the installation is completed, we can run the Docker environment and the terminal will be shown:

docker1

Docker configuration file

Once the environment is ready, we have to configure our application by adding a Dockerfile in its root.

It’s a configuration file where we give Docker the instructions to build the container:

FROM microsoft/aspnetcore-build AS builder
WORKDIR /source

COPY *.csproj .
RUN dotnet restore

COPY . .
RUN dotnet publish --output /app/ --configuration Release

FROM microsoft/aspnetcore
WORKDIR /app
COPY --from=builder /app .
ENTRYPOINT ["dotnet", "WebApp.dll"]

The file is composed of two parts; the first one deals with building the project. With the FROM instruction I declare the base image for my container, the Microsoft image with the ASP.NET Core build environment; this image has everything needed to build a project, such as the .NET Core framework.

After defining the WORKDIR, I copy the csproj into the working directory of the container (the second parameter “.” refers to the current working directory) and then I run the dotnet restore command.

Next I copy the rest of the project and run the dotnet publish command with the /app folder as the output parameter.

Now for the second part of the configuration file, which configures the startup of the application.

In this case, the base image is aspnetcore, which has the ASP.NET Core runtime already installed; I define the /app folder as the workdir and copy the published package into this folder.

The last row is the entry point of the container, that is, the startup command to be executed; in my case it starts up my web application.

Build and run

Now I’m ready to prepare and run my app in a Docker container.

The first thing I need to do is build the application image with a Docker command.

So I go to the root of the web application and run this command:

docker build -t webapp .

This command builds the Docker image from which the containers will be generated.

The starting point for every container is an image, which the Docker daemon will use to create a container when requested.
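If we want to verify that the image has been created, we can list the local images with the standard command:

docker images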

If the command ends successfully, we are ready to run a new container from that image:

docker run --detach -p 5000:80 --name webapp_container webapp

The command is quite simple: with the detach option we tell Docker that we don’t need an interactive console for the container, so we don’t have to manage it; with the -p parameter we map the host port exposed to the outside (5000) to the port the container listens on internally (80); the name parameter is the name associated with the container.

After executing the command, we can check the running containers with:

docker ps -a

The result of this command should look like this:

docker2

In the first row I have the container up and running, but before accessing the application from a web browser, we need to retrieve the IP of the Docker virtual machine.

We can do this with the command:

docker-machine ip

Now I can type in my browser:

http://ipaddress:5000

And the web application responds!

That’s all: I have installed Docker on my machine, configured the Dockerfile for my application, built the image and run the Docker container.

 


Managing OAuth 2 authentication with Swagger

In this post I want to talk about a product that can help us produce documentation for the Web API services implemented in our application.

Swagger is a popular framework that, once installed in an ASP.NET application, is able to produce documentation for all the Web APIs implemented in the project.

Furthermore, it gives us a lot of flexibility and makes it possible to add custom filters to change the standard behaviour; for example, adding OAuth authentication management for protected applications.

So let’s go through the steps necessary to install and configure this framework.

Configuration

The first step is to install the package in our project, which we can do with NuGet:


Install-Package Swashbuckle -Version 5.6.0

Now we need to add the Swagger configuration in the Startup.cs file:


config.EnableSwagger(c =>
{
    c.SingleApiVersion("v1", "BaseSPA.Web");
    c.OperationFilter<SwaggerFilter>();
    c.PrettyPrint();
    c.IgnoreObsoleteActions();
}).EnableSwaggerUi();

In the configuration we define the description of the project, we say that we want the JSON output pretty-printed and that we want to ignore obsolete actions.

Furthermore, we register a SwaggerFilter, a custom filter used to manage the OAuth authentication.

This is the SwaggerFilter implementation:


public class SwaggerFilter : IOperationFilter
{
    public void Apply(Operation operation, SchemaRegistry schemaRegistry, ApiDescription apiDescription)
    {
        var toBeAuthorize = apiDescription.GetControllerAndActionAttributes<AuthorizeAttribute>().Any();

        if (toBeAuthorize)
        {
            if (operation.parameters == null)
                operation.parameters = new List<Parameter>();

            operation.parameters.Add(new Parameter()
            {
                name = "Authorization",
                @in = "header",
                description = "bearer token",
                required = true,
                type = "string"
            });
        }
    }
}

First of all, we need to implement the IOperationFilter interface and its Apply method.

In this method we check for the actions protected with the Authorize attribute; for these, we add a new Authorization parameter that will be shown in the Swagger UI and will be used to set the bearer token.
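For instance, an action protected like the following (OrdersController is just a hypothetical example) is detected by the filter and gets the extra header parameter in the UI:

[Authorize]
public class OrdersController : ApiController
{
    // Because of the Authorize attribute, the SwaggerFilter adds the
    // "Authorization" header parameter to this operation in the Swagger UI
    public IHttpActionResult Get()
    {
        return Ok(new[] { "order1", "order2" });
    }
}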

Test Web API

After compiling the project, we can access the URL of the application and append the term swagger at the end, like this:


http://localhost/swagger

This is what will be shown:

swagger2

If we open one of the available actions, we notice the Authorization parameter that we configured above:

swagger5

Now what we need is the bearer token to send to the action, and we can retrieve it with Postman; we have to send a POST request to the token endpoint configured in the application, like this:

swagger1

I send a request with the username, password and grant_type keys and specify the content type as x-www-form-urlencoded as well.
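For reference, this is a standard OAuth 2 resource owner password credentials grant; the endpoint path and the credentials below are just placeholders:

POST /token HTTP/1.1
Content-Type: application/x-www-form-urlencoded

grant_type=password&username=myuser&password=mypassword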

The response contains the access_token, which I can copy into the Authorization field in the Swagger UI:

swagger3

That’s all: by sending the request, the application recognizes me and sends me the response.

You can find the source code here.

 

 

 

 

 


Consume Web API OData with ODataAngularResources

One month ago, I wrote this post about OData services and how we can consume them with the $http service of AngularJS.

This approach is not wrong, but it is not very smart, because we have to hand-write a query for every call the application needs to make, and it is not flexible.

What we could do is write a custom library that deals with the OData URL format and exposes some fluent methods that the components of the application can call; this way we have a layer between the Angular services and the OData controllers, and the code becomes more readable.

However, writing a library like that is not very easy and requires a lot of time.

The solution to these problems is an existing library that we can use directly in our Angular application: ODataAngularResources.

This is a lightweight library that allows writing OData queries with fluent methods.

What I’m going to do is leverage this library for the queries and implement a custom service to manage saving the entities.

ODataResource factory

The first factory I want to implement is an object that wraps the basic ODataResources module and exposes the CRUD operations of the entity.

First of all, I need to install the library; you can refer to the GitHub project for the details.

Then I can start with the implementation by creating the new module:


(function(window, angular) {
'use-strict';
angular.module('odataResourcesModule', ['ui.router', 'ODataResources'])

.factory('odataResource', function ($odataresource, $http, $q) {

function odataResource(serviceRootUrl, resourcePath, key) {
this.serviceRootUrl = serviceRootUrl;
this.resourcePath = resourcePath;
this.key = key;

this._odataResource = $odataresource(serviceRootUrl + '/' + resourcePath, {}, {}, {
odatakey: key,
isodatav4: true
});

angular.extend(this._odataResource.prototype, {
'$patch': function () {
var defer = $q.defer();

var req = {
method: 'PATCH',
url: serviceRootUrl + '/' + resourcePath + '(' + this.Id + ')',
data: this
};

$http(req).then(function (data) {
defer.resolve(data);
}, function (error) {
defer.reject(error);
});

return defer.promise;
}
});
}

.....

The factory has a constructor with three parameters: the serviceRootUrl, which is the first part of the URL (in my case odata); the resourcePath, which identifies the entity (Blogs); and the key of the table.

Then I configure $odataresource with these parameters and enable OData v4 as well.

Finally, I extend the odataResource prototype: because this library doesn’t provide an implementation for the PATCH verb, I need to implement it with a custom function.

So far so good; now I want to expose some methods to query the resource or retrieve a single entity:


odataResource.prototype.getResource = function () {
return this._odataResource.odata();
}

odataResource.prototype.get = function (id) {
var defer = $q.defer();

this._odataResource.odata().get(id, function (data) {
data._originalResource = angular.copy(data);
defer.resolve(data);
}, function (error) {
defer.reject(error);
});

return defer.promise;
}

With the getResource method I expose a getter for the resource and allow the caller to execute queries.

The get method deserves a particular explanation: once the single entity is retrieved, I make a copy of the original entity and assign it to a new private property named _originalResource.

This will allow me to check which values have been changed on the entity and compose the object that will be sent with the patch method.

Now I can implement the other methods to add, update and delete the entity:


odataResource.prototype.new = function () {
return new this._odataResource();
}

odataResource.prototype.add = function (resource) {
var defer = $q.defer();

resource.$save(function (data) {
data._originalResource = angular.copy(data);
defer.resolve(data);
}, function (error) {
defer.reject(error);
});

return defer.promise;
}

odataResource.prototype.update = function (resource) {
var defer = $q.defer();

var self = this;
resource.$patch().then(function () {
self.get(resource[self.key]).then(function (data) {
defer.resolve(data);
});
}, function (error) {
defer.reject(error);
});

return defer.promise;
}

odataResource.prototype.delete = function (resource) {
var defer = $q.defer();

resource.$delete(function (data) {
defer.resolve(data);
}, function (error) {
defer.reject(error);
});

return defer.promise;
}

As in the get method, I make a copy of the entity in the add method as well.

Now I have a factory that is able to query Web API OData services and manage the CRUD operations, and I can proceed with some business logic.

ODataGenericResource factory

The first step is to define the new factory:


.factory('odataGenericResource', function ($q, odataResource) {

function odataGenericResource(serviceRootUrl, resourcePath, key) {
this.odataResource = new odataResource(serviceRootUrl, resourcePath, key);
}

In the constructor I create a new instance of the odataResource factory, in order to leverage the methods implemented above.

Now I implement a get method that, based on the id parameter, will either create a new resource or perform a get against the Web API OData service:


odataGenericResource.prototype.get = function (id) {
if (id === '') {
var defer = $q.defer();
defer.resolve(this.odataResource.new());

return defer.promise;
} else {
return this.odataResource.get(id);
}
}

If an external service calls this method with an empty id, it will receive a new resource; otherwise it will receive the entity, if it exists.

Now I need to implement the save method:


odataGenericResource.prototype.isChanged = function (resource) {
var isChanged = false;
for (var propertyName in resource) {
if (isEntityProperty(propertyName) && resource._originalResource[propertyName] !== resource[propertyName]) {
isChanged = true;
}
}

return isChanged;
}

odataGenericResource.prototype.getObjectToUpdate = function (resource) {
var object = this.odataResource.new();
object[this.odataResource.key] = resource[this.odataResource.key];

for (var propertyName in resource) {
if (isEntityProperty(propertyName) && resource._originalResource[propertyName] !== resource[propertyName]) {
object[propertyName] = resource[propertyName];
}
}

return object;
}

odataGenericResource.prototype.save = function(resource) {
if (!resource._originalResource) {
return this.odataResource.add(resource);
} else if (this.isChanged(resource)) {
var object = this.getObjectToUpdate(resource);
return this.odataResource.update(object);
} else {
var defer = $q.defer();
defer.resolve(resource);

return defer.promise;
}
}

I need some methods to check the state of the entity.

The isChanged method checks whether any property of the entity has changed, leveraging the _originalResource property.

The second method prepares the actual object to save, with only the properties that have been changed.

The save method checks whether the entity is new (it doesn’t have the _originalResource property); based on that, it will be added or updated.

The delete method doesn’t contain anything special, so we can now take a look at how to use this factory in the application.

Application

I can use this factory in any Angular module that needs to manage CRUD operations for an entity.

For example, in a blogsModule, I can define a factory like this:


(function (window, angular) {
'use-strict';
angular.module('blogsModule', ['ui.router', 'odataResourcesModule'])
.factory('blogsService', function ($http, odataGenericResource) {
return new odataGenericResource('odata', 'Blogs', 'Id');
})

And now in the controller:


.controller('blogsCtrl', function ($scope, $state, blogsService) {

$scope.new = function() {
$state.go("home.blog", { id: null });
};

$scope.detail = function(id) {
$state.go("home.blog", { id: id });
};

$scope.Blogs = blogsService.getOdataResource().query();
})
.controller('blogsDetailCtrl', function ($scope, $state, blogsService) {
var load = function (id) {
blogsService.get(id).then(function(data) {
$scope.Blog = data;
});
};

$scope.save = function () {
blogsService.save($scope.Blog).then(function(data) {
load(data.Id);
});
}

$scope.delete = function () {
blogsService.delete($scope.Blog).then(function () {
$scope.close();
});
};

$scope.close = function () {
$state.go("home.blogs");
};

load($state.params.id);
});

With a few lines of code I can manage the calls to a Web API OData service, with the ability to write fluent OData queries.

You can find the source code here.


Continuous integration with Atlassian Bamboo

Continuous integration is a frequent theme in application development, especially in projects with many developers involved, each working on different components of the application.

Precisely because the developers work on different features, a process that frequently integrates the new code into the project and verifies that all its dependencies still work is recommended.

The practice of integrating the new code into the shared project repository daily avoids problems like integration hell, where a developer has trouble integrating the code of another developer who worked in isolation for a long period.

Moreover, a process that manages this integration allows us to introduce automatic tests in this phase and discover possible regressions in the modified code.

There are different products to manage the continuous integration process, and one of them is Bamboo, a product owned by Atlassian; in this post I want to talk about configuring continuous integration for a .NET project with a bunch of tests implemented with NUnit.

Server configuration

Before proceeding with the project configuration, we need to take care of the executables that Bamboo will need to execute the process.

In my case I need the MSBuild and NUnit executables and I can configure them in the server capabilities section of the Bamboo configuration:

configuration7

In my case I use the MSBuild installed with Visual Studio, but you can download it from this URL.

The NUnit console is the executable used for the test execution, and you can find it here.

Plan

When I need to configure a new project in Bamboo, I need to create a new plan.

A plan is composed of one or more stages; in my case I have a build stage, a test stage and a deploy stage.

Within the stages I will create the jobs that do the actual work.

In the plan configuration I need to take care of the linked repository, that is, the repository from which Bamboo will load the source code:

configuration5

In my case I have specified the URL of a GitHub repository.

Another thing I can do is define a trigger for the plan, that is, an event after which the plan execution will be fired.

I define a branch that Bamboo will poll for new changes:

configuration6

Every time a new commit is pushed to the master branch, the plan will be executed.

Build

Now I configure the Solution Build job in the Build stage; every job is composed of one or more tasks, and in this case I have three.

The first one is the Source Code Checkout, the phase where Bamboo gets the source code from the repository and copies all the content into its working directory (locally, on the server where Bamboo is installed).

The second one is a Script task; in my case I need to restore some NuGet packages before building the project, and I do that with a script that launches the NuGet executable with the restore option (a minimal example is sketched below the screenshot):

configuration8

You can download the NuGet executable here.
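As a rough sketch, the script task can be a single command; the path and the solution name below are placeholders:

"C:\Tools\nuget.exe" restore MySolution.sln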

The last task is the MSBuild task, where we use the executable configured above:

configuration9

Another thing we can do in the job configuration is define an artifact, so that the output of the job will be available to the next jobs:

configuration10

In this way we speed up the next job, because it won’t need to check out the repository again.

Test

The test job is simpler, because the source code is already available and it only has the test runner task:

configuration11

Here again we use the other executable configured above, and we only have to specify the DLL of the test project.

We mustn’t forget to configure the artifact for the job and use the one produced by the previous job:

configuration12

Deploy

The last stage is the deploy.

To have a clean situation I do a Source Code Checkout again, and then I execute a PowerShell script:

configuration13

The script is explained in the next post and deals with things like building the project, applying a version to the assembly and copying the result to a shared directory.

That’s all; when you run the plan, the result (if the project is OK :)) will look like this:

configuration4

 


Deploy a .NET project with powershell and git

In this post I want to share my experience of writing a PowerShell script to compile and publish a .NET project.

I use Git and a GitHub repository, so in my mind the script should restore the NuGet packages, build the project with MSBuild, take the latest tag (version) from the master branch of the GitHub repository and, if valid, apply the version found in the tag to the project assembly.

Finally, it has to produce a folder with the deployment package.

So, let’s start with the steps to implement this script.

Project configuration

Before writing the PowerShell script, I need to set up the configuration for the project deployment.

So right-click on the project (in my case a web project) and select Publish; this window will appear:

configuration1

Then we need to create a new profile and select (in my case) the folder option:

configuration2

After that, we will have a new pubxml file in the solution:

configuration3

We will use this file in our PowerShell script.

Source code versioning

Now it’s time to implement the script; the first step is to apply a version to the assembly and use it in the name of the folder where the application will be published.

My repository provider is GitHub, and every time I release a version on the master branch, I apply a tag to the commit with the release number, like 0.2.0.

So my script has to be able to get the latest tag from the master branch and apply it to the assembly.

We have several options to apply a version to a .NET application; the most standard way is to use the AssemblyInfo.cs file, where we can have attributes like AssemblyVersion, AssemblyFileVersion and AssemblyInformationalVersion.

While the first two attributes need a version in the standard format, the last one leaves us the freedom to use a custom versioning scheme, for example if we want to include the current date, the name of the git branch and so on.

For this reason I’ll update the AssemblyInformationalVersion.
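For reference, these attributes live in AssemblyInfo.cs and look like this (the values are just examples):

// AssemblyInfo.cs - example values only
[assembly: AssemblyVersion("0.2.0.0")]
[assembly: AssemblyFileVersion("0.2.0.0")]
// Free-form string: it could also embed a branch name or a build date
[assembly: AssemblyInformationalVersion("0.2.0")]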

So, first of all I need to retrieve the version from the tag applied on the GitHub repository:

$version = $(git describe --abbrev=0 --tag)

By executing this git command from the solution folder, I can retrieve the latest tag applied and use it as the new version, or as part of it.

Now I can check whether the version has a specific format; for example, I want the version to be composed of two or three numbers:


# Match versions composed of three numbers (e.g. 1.2.3) or two numbers (e.g. 1.2)
$versionRegex1 = "\d+\.\d+\.\d+"
$versionData1 = [regex]::matches($version,$versionRegex1)
$versionRegex2 = "\d+\.\d+"
$versionData2 = [regex]::matches($version,$versionRegex2)

if ($versionData1.Count -eq 0 -and $versionData2.Count -eq 0) { Throw "Version " + $version + " has a bad format" }

If these checks are satisfied, I can apply the version; first I search for the AssemblyInfo.cs files in the solution:


$files = Get-ChildItem $sourceDirectory -recurse -include "*Properties*" |
?{ $_.PSIsContainer } |
foreach { Get-ChildItem -Path $_.FullName -Recurse -include AssemblyInfo.* }

If I have found the files, I can apply the new version:


if ($files) {
Write-Host "Updating version" $version
foreach ($file in $files) {
$filecontent = Get-Content($file)
attrib $file -r
$informationalVersion = [regex]::matches($filecontent,"AssemblyInformationalVersion\(""$version""\)")

if ($informationalVersion.Count -eq 0) {
Write-Host "Version " $version " applied to " $file
$filecontent -replace "AssemblyInformationalVersion\(.*\)", "AssemblyInformationalVersion(""$version"")" | Out-File $file
}

}
}

I check that the attribute doesn’t already have the new version value; otherwise nothing needs to be done.

The hardest step is complete; now I can build and deploy my project.

Build and deploy

In order to build the project I need MSBuild 15, which in my case is already installed with Visual Studio 2017.

If you haven’t got it, you can download it from the Microsoft website at this link.

If you have NuGet packages in the project, you also need the NuGet executable in order to restore the packages before the build, and you can download it from this link.

Now we are ready to write the code to build and deploy the project:


$msbuild = "C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\MSBuild\15.0\Bin\msbuild.exe"
$solutionFolder = $PSScriptRoot + "..\..\"
$solutionFile = $solutionFolder + "\EFContextMock.sln"
$projectFile = $solutionFolder + "\WebApp\WebApp.csproj"
$nuget = $solutionFolder + "\nuget\nuget.exe"
$version = $(git describe --abbrev=0 --tag)
$publishUrl = "c:\temp\EFContextMock\" + $version

SetAssemblyInfo $solutionFolder $version

Write-Host "Restore packages"

& $nuget restore $solutionFile

if ($LastExitCode -ne 0){
$exitCode=$LastExitCode
Write-Error "Package restore failed!"
exit $exitCode
}
else{
Write-Host "Package restore succeeded"
}

Write-Host "Building"

& $msbuild $projectFile /p:DeployOnBuild=true /p:PublishProfile=Publish.pubxml /p:PublishUrl=$publishUrl

if ($LastExitCode -ne 0){
$exitCode=$LastExitCode
Write-Error "Build failed!"
exit $exitCode
}
else{
Write-Host "Build succeeded"
}

After setting up some variables, I apply the version with the code discussed above and restore the NuGet packages.

The msbuild command uses the pubxml file created in the first step; one of the parameters of the command is PublishUrl, which in my case is a local path.

You can find the complete PowerShell script here.
