Managing OAuth 2 authentication with Swagger

In this post I want to talk about a product that can help us produce documentation for the Web API services implemented in our application.

Swagger is a popular framework that, once installed in an ASP.NET application, can produce documentation for all the Web APIs implemented in the project.

Furthermore, it gives us a lot of flexibility: we can add custom filters to change the standard behaviour, for example to manage the OAuth authentication for protected actions.

So let's go through the steps necessary to install and configure this framework.

Configuration

The first step is to install the package in our project, and we can do that with NuGet:


Install-Package Swashbuckle -Version 5.6.0

Now we need to add the Swagger configuration in the startup.cs file:


config.EnableSwagger(c =>
{
    c.SingleApiVersion("v1", "BaseSPA.Web");
    c.OperationFilter<SwaggerFilter>();
    c.PrettyPrint();
    c.IgnoreObsoleteActions();
}).EnableSwaggerUi();

In the configuration we define the description of the project, ask for pretty-printed JSON and ignore the obsolete actions.

Furthermore, we register SwaggerFilter, a custom filter used to manage the OAuth authentication.

This is the SwaggerFilter implementation:


public class SwaggerFilter : IOperationFilter
{
    public void Apply(Operation operation, SchemaRegistry schemaRegistry, ApiDescription apiDescription)
    {
        // Only the actions protected by the Authorize attribute need the extra parameter.
        var toBeAuthorized = apiDescription.GetControllerAndActionAttributes<AuthorizeAttribute>().Any();

        if (toBeAuthorized)
        {
            if (operation.parameters == null)
                operation.parameters = new List<Parameter>();

            operation.parameters.Add(new Parameter()
            {
                name = "Authorization",
                @in = "header",
                description = "bearer token",
                required = true,
                type = "string"
            });
        }
    }
}

First of all, we implement the IOperationFilter interface with its Apply method.

In this method we check which actions are protected with the Authorize attribute; for these, we add a new Authorization parameter that will be shown in the Swagger UI and will be used to set the bearer token.

Test Web API

After compiling the project, we can access the url of the application with the term swagger appended at the end, like this:


http://localhost/swagger

This is what will be shown:

[Screenshot: the Swagger UI listing the Web API actions of the project]

If we open one of the available actions, we notice the Authorization parameter that we configured above:

[Screenshot: the action detail with the Authorization parameter field]

Now what we need is the bearer token to send to the action, and we can retrieve it with Postman; we have to send a POST request to the token endpoint configured in the application, like this:

[Screenshot: the Postman request against the token endpoint]

I send a request with the username, password and grant_type keys, and I specify the content type as x-www-form-urlencoded as well.
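
In raw form, that call looks roughly like this (the /token path and the credential values are placeholders for the ones configured in your application):

POST /token HTTP/1.1
Host: localhost
Content-Type: application/x-www-form-urlencoded

username=myuser&password=mypassword&grant_type=password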

The response contains the access_token, and I can copy it into the Authorization field of the Swagger UI:

[Screenshot: the bearer token pasted into the Authorization field]

That's all: when I send the request, the application recognizes me and returns the response.

You can find the source code here.

Consume Web API OData with ODataAngularResources

One month ago I wrote this post about OData services and how to consume them with the $http factory of AngularJS.

That approach is not wrong, but it isn't smart either: we have to write every query by hand for each call the application needs to make, and it is not flexible.

What we could do is write a custom library that deals with the OData url format and exposes fluent methods for the components of the application to call; in this way we would have a layer between the Angular services and the OData controllers, and the code would be more readable.

However, writing a library like that is not easy and requires a lot of time.

The solution to these problems is an existing library that we can use directly in our Angular application: ODataAngularResources.

It is a lightweight library that allows writing OData queries with fluent methods.

What I'm going to do is leverage this library for the queries and implement a custom service to manage saving the entities.

ODataResource factory

The first factory that I want to implement is an object that wraps the basic ODataResources module and exposes the CRUD operations of an entity.

First of all I need to install the library; you can refer to the GitHub project for the details.

Then I can start the implementation by creating the new module:


(function(window, angular) {
    'use strict';

    angular.module('odataResourcesModule', ['ui.router', 'ODataResources'])

        .factory('odataResource', function ($odataresource, $http, $q) {

            function odataResource(serviceRootUrl, resourcePath, key) {
                this.serviceRootUrl = serviceRootUrl;
                this.resourcePath = resourcePath;
                this.key = key;

                this._odataResource = $odataresource(serviceRootUrl + '/' + resourcePath, {}, {}, {
                    odatakey: key,
                    isodatav4: true
                });

                // The library has no PATCH support out of the box,
                // so a $patch method is added to the resource prototype.
                angular.extend(this._odataResource.prototype, {
                    '$patch': function () {
                        var defer = $q.defer();

                        var req = {
                            method: 'PATCH',
                            url: serviceRootUrl + '/' + resourcePath + '(' + this.Id + ')',
                            data: this
                        };

                        $http(req).then(function (data) {
                            defer.resolve(data);
                        }, function (error) {
                            defer.reject(error);
                        });

                        return defer.promise;
                    }
                });
            }

            .....

The factory has a constructor with three parameters: serviceRootUrl, the first part of the url (in my case odata); resourcePath, which identifies the entity (Blogs); and the key of the table.

Then I configure $odataresource with these parameters, enabling OData v4 as well.

At the end I extend the resource prototype: this library doesn't provide an implementation for the PATCH verb, so I add it with a custom function.

So far so good; now I want to expose some methods to query the resource or retrieve a single entity:


odataResource.prototype.getResource = function () {
    return this._odataResource.odata();
}

odataResource.prototype.get = function (id) {
    var defer = $q.defer();

    this._odataResource.odata().get(id, function (data) {
        // Keep a copy of the entity as it was loaded, to detect changes later.
        data._originalResource = angular.copy(data);
        defer.resolve(data);
    }, function (error) {
        defer.reject(error);
    });

    return defer.promise;
}

With the getResource method I expose the underlying resource and allow the caller to execute queries, as in the example below.
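
A sketch of such a fluent query, assuming blogsResource is an instance of this factory and using the fluent methods of ODataAngularResources:

// Fetch the first 10 blogs named 'MyBlog', ordered by name.
var blogs = blogsResource.getResource()
    .filter('Name', 'MyBlog')
    .orderBy('Name')
    .take(10)
    .query();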

The get method deserves a particular explanation: once the single entity is retrieved, I make a copy of it and assign the copy to a new private property named _originalResource.

This will allow me to check which values have been changed on the entity, and to compose the object that will be sent with the patch method.

Now I can implement the other methods to add, update and delete the entity:


odataResource.prototype.new = function () {
    return new this._odataResource();
}

odataResource.prototype.add = function (resource) {
    var defer = $q.defer();

    resource.$save(function (data) {
        data._originalResource = angular.copy(data);
        defer.resolve(data);
    }, function (error) {
        defer.reject(error);
    });

    return defer.promise;
}

odataResource.prototype.update = function (resource) {
    var defer = $q.defer();

    var self = this;
    resource.$patch().then(function () {
        self.get(resource[self.key]).then(function (data) {
            defer.resolve(data);
        });
    }, function (error) {
        defer.reject(error);
    });

    return defer.promise;
}

odataResource.prototype.delete = function (resource) {
    var defer = $q.defer();

    resource.$delete(function (data) {
        defer.resolve(data);
    }, function (error) {
        defer.reject(error);
    });

    return defer.promise;
}

As in the get method, I make a copy of the entity in the add method as well.

Now I have a factory that can query Web API OData services and manage the CRUD operations, and I can proceed with some business logic.

ODataGenericResource factory

The first step is to define the new factory:


.factory('odataGenericResource', function ($q, odataResource) {

    function odataGenericResource(serviceRootUrl, resourcePath, key) {
        this.odataResource = new odataResource(serviceRootUrl, resourcePath, key);
    }

In the constructor I create a new instance of the odataResource factory, in order to leverage the methods implemented above.

Now I implement a get method that, based on the id parameter, either creates a new resource or performs a get against the Web API OData service:


odataGenericResource.prototype.get = function (id) {
    if (id === '') {
        var defer = $q.defer();
        defer.resolve(this.odataResource.new());

        return defer.promise;
    } else {
        return this.odataResource.get(id);
    }
}

A caller that invokes this method with an empty id receives a new resource; otherwise it receives the requested entity, if it exists.

Now I need to implement the save method:


odataGenericResource.prototype.isChanged = function (resource) {
    var isChanged = false;
    // isEntityProperty is a helper (not shown here) that presumably skips
    // non-entity members such as _originalResource itself.
    for (var propertyName in resource) {
        if (isEntityProperty(propertyName) && resource._originalResource[propertyName] !== resource[propertyName]) {
            isChanged = true;
        }
    }

    return isChanged;
}

odataGenericResource.prototype.getObjectToUpdate = function (resource) {
    var object = this.odataResource.new();
    object[this.odataResource.key] = resource[this.odataResource.key];

    for (var propertyName in resource) {
        if (isEntityProperty(propertyName) && resource._originalResource[propertyName] !== resource[propertyName]) {
            object[propertyName] = resource[propertyName];
        }
    }

    return object;
}

odataGenericResource.prototype.save = function (resource) {
    if (!resource._originalResource) {
        return this.odataResource.add(resource);
    } else if (this.isChanged(resource)) {
        var object = this.getObjectToUpdate(resource);
        return this.odataResource.update(object);
    } else {
        // Nothing changed: resolve immediately with the untouched resource.
        var defer = $q.defer();
        defer.resolve(resource);

        return defer.promise;
    }
}

I need some methods to check the state of the entity.

The isChanged method checks whether any property of the entity has changed, leveraging the _originalResource property.

The second method prepares the actual object to save, with only the properties that have been changed.

The save method checks whether the entity is new (it doesn't have the _originalResource property); based on that, the entity is added or updated.

The delete method contains nothing special: it simply delegates to the inner resource, as in the sketch below. Now we can take a look at how to use this factory in the application.
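
A minimal sketch of that delegating method, assuming it just forwards to the wrapped odataResource instance:

odataGenericResource.prototype.delete = function (resource) {
    // No state checks needed: delegate straight to the inner resource.
    return this.odataResource.delete(resource);
}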

Application

I can use this factory in any Angular module that needs to manage CRUD operations for an entity.

For example, in a blogsModule, I can define a factory like this:


(function (window, angular) {
    'use strict';

    angular.module('blogsModule', ['ui.router', 'odataResourcesModule'])

        .factory('blogsService', function ($http, odataGenericResource) {
            return new odataGenericResource('odata', 'Blogs', 'Id');
        })

And now in the controller:


.controller('blogsCtrl', function ($scope, $state, blogsService) {

    $scope.new = function () {
        $state.go("home.blog", { id: null });
    };

    $scope.detail = function (id) {
        $state.go("home.blog", { id: id });
    };

    // getOdataResource (not shown in this post) exposes the fluent resource for queries.
    $scope.Blogs = blogsService.getOdataResource().query();
})

.controller('blogsDetailCtrl', function ($scope, $state, blogsService) {
    var load = function (id) {
        blogsService.get(id).then(function (data) {
            $scope.Blog = data;
        });
    };

    $scope.save = function () {
        blogsService.save($scope.Blog).then(function (data) {
            load(data.Id);
        });
    }

    $scope.delete = function () {
        blogsService.delete($scope.Blog).then(function () {
            $scope.close();
        });
    };

    $scope.close = function () {
        $state.go("home.blogs");
    };

    load($state.params.id);
});

With a few lines of code I can manage the calls to a Web API OData service, with the ability to write fluent OData queries.

You can find the source code here.

Continuous integration with Atlassian Bamboo

Continuous integration is a frequent theme in application development, especially in projects with many developers working on different components of the application.

Precisely because the developers work on different features, a process that frequently integrates the new code into the project and verifies that all its dependencies still work is recommended.

Integrating new code into the shared repository daily avoids problems like integration hell, where a developer struggles to integrate the code of another developer who worked in isolation for a long period.

Moreover, a process that manages this integration allows us to run automated tests in this phase and discover any regressions in the modified code.

There are different products to manage the continuous integration process, and one of them is Bamboo, a product owned by Atlassian. In this post I want to talk about configuring continuous integration for a .NET project with a bunch of tests implemented in NUnit.

Server configuration

Before proceeding with the project configuration, we need to take care of the executables that Bamboo needs to run the process.

In my case I need the MSBuild and NUnit executables, and I can configure them in the server capabilities section of the Bamboo configuration:

[Screenshot: the MSBuild and NUnit executables configured in the Bamboo server capabilities]

In my case I use the MSBuild installed with Visual Studio, but you can also download it from this url.

The NUnit console runner is the executable used for the test execution, and you can find it here.

Plan

When I need to configure a new project in Bamboo, I create a new plan.

A plan is composed of one or more stages; in my case I have a build stage, a test stage and a deploy stage.

Inside the stages I create the jobs that do the actual work.

In the plan configuration I need to take care of the linked repository, that is, the repository from which Bamboo will load the source code:

[Screenshot: the linked repository settings of the plan]

In my case I have specified the url of a GitHub repository.

Another thing I can do is define a trigger for the plan, that is, an event that fires the plan execution.

I define a branch that Bamboo will poll for new changes:

[Screenshot: the polling trigger configured on the master branch]

Every time a new commit is pushed to the master branch, the plan will be executed.

Build

Now I configure the Solution Build job in the Build stage; every job is composed of one or more tasks, and in this case I have three tasks.

The first one is the Source Code Checkout, the phase where Bamboo gets the source code from the repository and copies all the content into its working directory (local to the server where Bamboo is installed).

The second one is a Script task: in my case I need to restore some NuGet packages before building the project, and I do that with a script that launches the nuget executable with the restore option:

[Screenshot: the Script task that restores the NuGet packages]
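
The script itself can be a one-liner; a sketch, assuming the solution file is named MySolution.sln and nuget.exe is on the path:

REM Restore the NuGet packages before the MSBuild task runs.
nuget.exe restore MySolution.sln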

You can download the nuget executable here.

The last task is the MSBuild task, where we use the executable configured above:

[Screenshot: the MSBuild task configuration]
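
Conceptually, the task runs the equivalent of this command (the solution name and configuration are assumptions):

REM What the MSBuild task executes, using the server capability configured above.
msbuild.exe MySolution.sln /p:Configuration=Release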

Another thing we can do in the job configuration is define an artifact, so that the output of the job is available to the next jobs:

[Screenshot: the artifact definition of the build job]

In this way we speed up the next job, because it won't need to check out the repository again.

Test

The test job is simpler: the source code is already available, so it only needs the test runner task:

[Screenshot: the NUnit runner task of the test job]

Here again we use the NUnit executable configured above, and we only have to specify the dll of the test project.

We must not forget to configure the artifact for this job too, using the one produced by the previous job:

[Screenshot: the artifact dependency of the test job]

Deploy

The last stage is the deploy.

To start from a clean situation I do a Source Code Checkout again, and then I execute a PowerShell script:

[Screenshot: the PowerShell script task of the deploy job]

The script was explained in the previous post; it builds the project, applies a version to the assembly and copies the result to a shared directory.

That's all; when you run the plan, the result (if the project is ok :)) will look like this:

[Screenshot: the plan summary with all the stages completed successfully]

Deploy a .NET project with powershell and git

In this post I want to share my experience in writing a PowerShell script to compile and publish a .NET project.

I use Git and a GitHub repository, so in my mind the script should restore the NuGet packages, build the project with MSBuild, take the last tag (version) from the master branch of the GitHub repository and apply the version found in the tag (if valid) to the project assembly.

Finally, it has to produce a folder with the deployment package.

So, let's start with the steps to implement this script.

Project configuration

Before writing the PowerShell script, I need to set up the configuration for the project deployment.

So right-click the project (in my case a web project) and select Publish; this window will appear:

[Screenshot: the Publish dialog of Visual Studio]

Then we need to create a new profile and select (in my case) the folder option:

[Screenshot: the new publish profile with the folder option selected]

After that, we will have a new pubxml file in the solution:

[Screenshot: the pubxml file added to the solution]

We will use this file in our PowerShell script.

Source code versioning

Now it's time to implement the script, and the first step is to apply a version to the assembly and use it in the name of the folder where the application will be published.

My repository provider is GitHub, and every time I release a version on the master branch I tag the commit with the release number, like 0.2.0.

So my script has to be able to get the last tag from the master branch and apply it to the assembly.

We have several options to apply a version to a .NET application; the most standard way is the AssemblyInfo.cs file, where we can have attributes like AssemblyVersion, AssemblyFileVersion and AssemblyInformationalVersion.

While the first two attributes require a version in the standard format, the last one leaves us the freedom to use custom versioning, for example to include the current date or the name of the git branch.
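
For example, the three attributes in AssemblyInfo.cs might look like this (the values are illustrative):

// AssemblyVersion and AssemblyFileVersion require the standard numeric format;
// AssemblyInformationalVersion accepts any string.
[assembly: AssemblyVersion("0.2.0.0")]
[assembly: AssemblyFileVersion("0.2.0.0")]
[assembly: AssemblyInformationalVersion("0.2.0-master")]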

For this reason I’ll update the AssemblyInformationalVersion.

So, first of all I need to retrieve the version from the tag applied on the GitHub repository:

$version = $(git describe --abbrev=0 --tag)

By executing this git command from the solution folder, I retrieve the last tag applied (for example 0.2.0) and can use it as the new version, or as part of it.

Now I can check that the version has a specific format; for example, I want the version to be composed of two or three numbers:


# Note: the dots are escaped so they match a literal '.', not any character.
$versionRegex1 = "\d+\.\d+\.\d+"
$versionData1 = [regex]::matches($version, $versionRegex1)
$versionRegex2 = "\d+\.\d+"
$versionData2 = [regex]::matches($version, $versionRegex2)

if ($versionData1.Count -eq 0 -and $versionData2.Count -eq 0) { Throw "Version $version has a bad format" }

If these checks pass, I can apply the version; first I search for the AssemblyInfo files in the solution:


# Look for the AssemblyInfo.* files inside every Properties folder.
$files = Get-ChildItem $sourceDirectory -Recurse -Include "*Properties*" |
    ?{ $_.PSIsContainer } |
    foreach { Get-ChildItem -Path $_.FullName -Recurse -Include AssemblyInfo.* }

If files were found, I can apply the new version:


if ($files) {
    Write-Host "Updating version" $version
    foreach ($file in $files) {
        $filecontent = Get-Content($file)
        attrib $file -r   # remove the read-only flag, if set

        $informationalVersion = [regex]::matches($filecontent, "AssemblyInformationalVersion\(""$version""\)")

        if ($informationalVersion.Count -eq 0) {
            Write-Host "Version " $version " applied to " $file
            $filecontent -replace "AssemblyInformationalVersion\(.*\)", "AssemblyInformationalVersion(""$version"")" | Out-File $file
        }
    }
}

I check that the attribute doesn't already contain the new version value; in that case there is nothing to do.

The hardest step is done; now I can build and deploy the project.

Build and deploy

In order to build the project I need MSBuild 15, which in my case is already installed with Visual Studio 2017.

If you don't have it, you can download it from the Microsoft web site at this link.

If the project uses NuGet packages, you also need the nuget executable to restore them before building; you can download it from this link.

Now we are ready to write the code to build and deploy the project:


$msbuild = "C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\MSBuild\15.0\Bin\msbuild.exe"
$solutionFolder = $PSScriptRoot + "..\..\"
$solutionFile = $solutionFolder + "\EFContextMock.sln"
$projectFile = $solutionFolder + "\WebApp\WebApp.csproj"
$nuget = $solutionFolder + "\nuget\nuget.exe"
$version = $(git describe --abbrev=0 --tag)
$publishUrl = "c:\temp\EFContextMock\" + $version

SetAssemblyInfo $solutionFolder $version

Write-Host "Restore packages"

& $nuget restore $solutionFile

if ($LastExitCode -ne 0) {
    $exitCode = $LastExitCode
    Write-Error "Package restore failed!"
    exit $exitCode
}
else {
    Write-Host "Package restore succeeded"
}

Write-Host "Building"

& $msbuild $projectFile /p:DeployOnBuild=true /p:PublishProfile=Publish.pubxml /p:PublishUrl=$publishUrl

if ($LastExitCode -ne 0) {
    $exitCode = $LastExitCode
    Write-Error "Build failed!"
    exit $exitCode
}
else {
    Write-Host "Build succeeded"
}

After some variable setup, I apply the version with the code discussed above, wrapped in the SetAssemblyInfo function sketched below, and I restore the NuGet packages.
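
A minimal sketch of that helper, assuming its body is the versioning code shown in the previous section:

# Wraps the versioning steps so the main script can call them in one line.
function SetAssemblyInfo([string]$sourceDirectory, [string]$version) {
    # 1. validate $version against the version regexes
    # 2. find the AssemblyInfo.* files under $sourceDirectory
    # 3. rewrite AssemblyInformationalVersion where needed
}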

The msbuild command uses the pubxml file created in the first step; one of its parameters is PublishUrl, which in my case is a local path.

You can find the complete PowerShell script here.

Mocking Entity Framework DbContext with Moq

When we have to test methods that involve Entity Framework, a typical choice we face is between integration tests, with a real database, and unit tests.

If we choose the first option, with a database like SQL LocalDB, we'll have performance problems: the cost of creating the database and inserting the data at test startup is very high, and to guarantee the initial conditions we'd have to do it for each test.

What we can do instead is use a mocking framework that helps us mock the Entity Framework context; it behaves like an in-memory db context, similar to the .NET Core in-memory db context that we have seen in this post.

The factory

In practice, mocking a class means substituting the real implementation of its methods with our custom behaviour; for every method of the class we can set up return values, so we don't need the real implementation of the class: we have mocked the methods.

In our case, we can set up the EF DbSet with an in-memory list; the methods that use the context will no longer need the real database, because they will use the lists provided by the mock.

But before doing that, we have another problem: how can we provide our in-memory db context to the methods that we need to test?

In the real world, it's likely that the context is instantiated with a using statement in every method, like this:


public async Task<List<Person>> GetPersons(string query)
{
    using (var db = new Context())
    {
        .....
    }
}

This is a problem, because we are not able to inject our mocked context into the class.

But we can solve it with a Factory service, a singleton service that returns instances of the db context:


public class ContextFactory
{
    private Type _dbContextType;
    private DbContext _dbContext;

    public void Register<TDbContext>(TDbContext dbContext) where TDbContext : DbContext, new()
    {
        _dbContextType = typeof(TDbContext);
        _dbContext = dbContext;
    }

    public TDbContext Get<TDbContext>() where TDbContext : DbContext, new()
    {
        // Return a new (real) context unless an instance of the requested type was registered.
        if (_dbContext == null || _dbContextType != typeof(TDbContext))
        {
            return new TDbContext();
        }

        return (TDbContext)_dbContext;
    }
}

We have two methods: with Register we can set up a specific db context implementation; with Get we obtain a db context instance, which is the registered implementation if there is one, otherwise the default implementation.

We can now inject this service as a dependency and use it:


public class PersonService
{
    private readonly ContextFactory _contextFactory;

    public PersonService(ContextFactory contextFactory)
    {
        _contextFactory = contextFactory;
    }

    public async Task<List<Person>> GetPersons(string query)
    {
        using (var db = _contextFactory.Get<Context>())
        {
            return await db.Persons.Where(p => p.TaxCode.Contains(query) || p.Firstname.Contains(query) || p.Surname.Contains(query)).ToListAsync();
        }
    }
}

Now we are ready to mock the EF context.

The mock

The framework that I use for this purpose is Moq, and I can install it with NuGet:

Install-Package Moq

It's likely that you use the async methods of Entity Framework; if so, in order to mock them we need an in-memory DbAsyncQueryProvider, and you can find the implementation here.

The unit testing framework used for this example is NUnit, and I can configure the mocked context in the setup method; the first step is to prepare a list of queryable objects:


[SetUp]
public void Setup()
{
    var persons = new List<Person>() {
        new Person() { TaxCode = "taxcode1", Firstname = "firstname1", Surname = "surname1" },
        new Person() { TaxCode = "taxcode2", Firstname = "firstname2", Surname = "surname2" }
    };
    var queryable = persons.AsQueryable();
}

Now I'm ready to set up the mock:


MockSet = new Mock<DbSet<Person>>();

MockSet.As<IQueryable<Person>>().Setup(m => m.Expression).Returns(queryable.Expression);
MockSet.As<IQueryable<Person>>().Setup(m => m.ElementType).Returns(queryable.ElementType);
MockSet.As<IQueryable<Person>>().Setup(m => m.GetEnumerator()).Returns(queryable.GetEnumerator);

MockSet.As<IQueryable<Person>>().Setup(m => m.Provider).Returns(new AsyncQueryProvider<Person>(queryable.Provider));
MockSet.As<IDbAsyncEnumerable<Person>>().Setup(m => m.GetAsyncEnumerator()).Returns(new AsyncEnumerator<Person>(queryable.GetEnumerator()));

In order to mock an IQueryable, I have to set up return values for the Expression, ElementType and GetEnumerator members; every time these members are invoked during query execution, the values I set up in the Returns expressions will be returned.

I need to do the same for the Provider member and GetAsyncEnumerator but, since async methods are involved, I have to use the custom AsyncQueryProvider and AsyncEnumerator classes of the in-memory DbAsyncQueryProvider.

The mocks for the Add and Remove operations are simpler:


MockSet.Setup(m => m.Add(It.IsAny<Person>())).Callback((Person person) => persons.Add(person));
MockSet.Setup(m => m.Remove(It.IsAny<Person>())).Callback((Person person) => persons.Remove(person));

For the Add and Remove methods we use Callback instead of Returns, so that the in-memory list behind the mock is actually updated.

The last step is to set up the context factory with the mocked version:


MockContext = new Mock<Context>();
MockContext.Setup(m => m.Persons).Returns(MockSet.Object);

var contextFactory = new ContextFactory();
contextFactory.Register(MockContext.Object);
PersonService = new PersonService(contextFactory);

First of all I set up the Persons DbSet of the mocked context to return the mocked DbSet.

Then I register the mocked context in the factory service, and I pass the factory as a dependency of the service under test.

With these lines of code I have mocked the Entity Framework context with an in-memory instance, and thanks to the context factory I was able to inject the mocked context into the service.
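
At this point a test can exercise GetPersons against the mocked context; a minimal sketch, based on the data prepared in the Setup method above:

[Test]
public async Task GetPersons_WithMatchingQuery_ReturnsFilteredPeople()
{
    // "taxcode1" matches only the first person of the in-memory list.
    var result = await PersonService.GetPersons("taxcode1");

    Assert.AreEqual(1, result.Count);
    Assert.AreEqual("firstname1", result[0].Firstname);
}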

You can find the source code here.

Consume Web API with an Angularjs service

This post concerns a problem I faced recently: building an AngularJS service to consume Web APIs, which in my case were implemented with the OData protocol.

The starting point is a set of Web APIs, one for every entity of the project, each responding to a specific url, for example /odata/Blogs and /odata/Posts.

I don't want to implement a separate service for every single Web API that does the same operations, so the idea is to implement a generic javascript class with the basic methods, and a factory that generates instances of this class; every module will have its own instance with custom options, such as the service url.

So what I want to do are two things: a generic class that implements the common methods for the CRUD operations, and then a factory that generates instances of this service.

Generic service

AngularJS has several recipes we can use to instantiate services; the provider is the lowest level one and is usually used to configure reusable services that can be shared by different applications.

Factory and service are syntactic sugar on top of a provider, and allow us to define singleton services in our application; the last two recipes are constant and value, which simply provide values.

One of the main debates about AngularJS is when to use a factory and when a service.

The main difference between them is that in the first one we need to return the service object ourselves, in this way:


(function (window, angular) {
    'use strict';

    angular.module('odataModule', [])

        .factory('odataService', [function () {
            return {
                .....
            };
        }]);
})(window, window.angular)

In the second one we don't need to do this; the service recipe deals with the object instantiation:


(function (window, angular) {
    'use strict';

    angular.module('odataModule', [])

        .service('odataService', [function () {
            .....
        }]);
})(window, window.angular)

What I want to implement is a javascript class with these properties/methods:

  • A url property, the Web API reference
  • An entityClass property, the javascript object that represents the entity to be managed
  • An isNew property, which lets the service understand whether the object needs to be added (post) or saved (patch)
  • A getEntities method that retrieves the list of the entities
  • A createEntity method that returns an instance of a new entity
  • A getEntity method that retrieves a single entity
  • A saveEntity method that deals with the post/patch of the entity
  • And finally a deleteEntity method

An Angular service helps in this work by letting me write cleaner code than a factory; a concrete implementation is:


(function (window, angular) {
    'use strict';

    angular.module('odataModule', [])

        .constant('blogsUrl', '/odata/Blogs')
        .constant('postsUrl', '/odata/Posts')

        .service('odataService', ['$http', function ($http) {
            function Instance(url, entityObject) {
                this.isNew = false;
                this.url = url;
                this.entityClass = entityObject;
            }

            Instance.prototype.getEntities = function () {
                return $http.get(this.url);
            }

            Instance.prototype.createEntity = function () {
                this.isNew = true;
                return angular.copy(this.entityClass);
            }

            Instance.prototype.getEntity = function (id) {
                return $http.get(this.url + '(guid\'' + id + '\')');
            }

            Instance.prototype.saveEntity = function (entity) {
                if (this.isNew) {
                    this.isNew = false;
                    return $http.post(this.url, entity);
                }
                else
                    return $http.patch(this.url + '(guid\'' + entity.Id + '\')', entity);
            }

            Instance.prototype.deleteEntity = function (entity) {
                return $http.delete(this.url + '(guid\'' + entity.Id + '\')');
            }

            // The service exposes a factory method that returns new Instance objects.
            this.GetInstance = function (url, entityObject) {
                return new Instance(url, entityObject);
            };
        }]);
})(window, window.angular)

The Instance class implements the common methods that we need.

The last rows are the main difference from a factory: in a service, the container function is treated as a constructor and Angular instantiates it, so I can define a GetInstance method that returns an instance of my custom class.

If I had used a factory, the last rows would have been the following:


return {
    GetInstance: function (url, entityObject) {
        return new Instance(url, entityObject);
    }
};

The implementation with the service is much cleaner.

Factories of the entities

Now I need to define a specific factory for every entity:


(function (window, angular) {
    'use strict';

    angular.module('blogsModule', ['ui.router', 'odataModule'])

        .factory('blogsFactory', ['odataService', 'blogsUrl', function (odataService, blogsUrl) {
            var blogEntity = { Name: '', Url: '' };
            return odataService.GetInstance(blogsUrl, blogEntity);
        }])
})(window, window.angular)
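
The postsUrl constant declared in the module can feed a second factory in exactly the same way; a sketch (the entity shape is an assumption):

.factory('postsFactory', ['odataService', 'postsUrl', function (odataService, postsUrl) {
    // Hypothetical entity shape for a post; adjust it to the real model.
    var postEntity = { Title: '', Body: '' };
    return odataService.GetInstance(postsUrl, postEntity);
}])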

So far so good: the factory returns an instance of the service configured with specific parameters, and now I can use it for the CRUD operations in my controllers:


(function (window, angular) {
    'use strict';

    angular.module('blogsModule', ['ui.router', 'odataModule'])

        ....

        .controller('blogsController', ['$scope', '$state', 'blogsFactory', function ($scope, $state, blogsFactory) {
            $scope.Title = 'Blogs';

            blogsFactory.getEntities().then(function (result) {
                $scope.blogs = result.data.value;
            });

            $scope.create = function () {
                $state.go('main.blog', { id: null });
            };

            $scope.edit = function (blog) {
                $state.go('main.blog', { id: blog.Id });
            };
        }])

        .controller('blogController', ['$scope', '$state', '$stateParams', 'odataService', function ($scope, $state, $stateParams, odataService) {
            $scope.Title = 'Blog';
            var service = odataService.GetInstance('/odata/Blogs', { Name: '', Url: '' });

            $scope.save = function () {
                service.saveEntity($scope.blog).then(function () {
                    $state.go('main.blogs');
                });
            };

            $scope.delete = function () {
                service.deleteEntity($scope.blog).then(function () {
                    $state.go('main.blogs');
                });
            };

            $scope.close = function () {
                $state.go('main.blogs');
            };

            if ($stateParams.id === '')
                $scope.blog = service.createEntity();
            else {
                // Note: getEntity is called on the service instance, not on odataService.
                service.getEntity($stateParams.id).then(function (result) {
                    $scope.blog = result.data;
                });
            }
        }]);
})(window, window.angular)

With a few lines of code I have implemented the CRUD operations.

The OData module could be improved with checks for entity validation, using the $metadata information returned by the OData services; I could also retrieve the entity fields without passing them as parameters. But anyway, this is a good starting point.

Web API in ASP.NET Core

In this post I'll talk about Web API in ASP.NET Core and how we can implement a RESTful service with this new framework.

One of the main differences in ASP.NET Core is that Web APIs and the classic MVC controllers share a unified Controller class.

So the declaration of a Web API controller in ASP.NET Core looks like this:


public class PostController : Controller
{
}

The attributes for the action verbs are pretty similar to MVC 5.

We can define the controller route as well, and we do that with the Route attribute:


[Route("api/[controller]")]
public class PostController : Controller
{
}

We can implement an action in the same way as in Web API 2:


[HttpGet("{offset}", Name = "GetPost")]
public IActionResult Get(int offset)
{
    var result = _postService.GetPosts(offset, true);

    return Ok(result);
}

With the HttpGet attribute we specify the verb of the method, its parameters and the action url; the Name property assigns a route name that other actions can reference, as in the sketch below.
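
For example, the GetPost route name can be reused to build the Location header of a 201 response; a hypothetical companion action (the Post model and the service call are assumptions):

[HttpPost]
public IActionResult Post([FromBody] Post post)
{
    // Hypothetical persistence call; not shown in this post.
    var created = _postService.Add(post);

    // CreatedAtRoute resolves the "GetPost" route defined above.
    return CreatedAtRoute("GetPost", new { offset = 0 }, created);
}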

So far so good; now there is one last question about this topic, which is content negotiation.

Since the ASP.NET Core MVC and Web API frameworks are unified, with the Produces attribute we can define the response content type of the controllers.

So, if we want to return content in JSON format, we apply this attribute:


[Produces("application/json")]
[Route("api/[controller]")]
public class PostController : Controller
{
}

With this attribute, the content type of the responses will be application/json. (Constraining the request content type is the job of the related Consumes attribute; a request with a different content type then gets a 415 Unsupported Media Type response.)

The source code for this topic is here.