Thursday, November 29, 2012

Hybrid CQRS implementation using MongoDb

Introduction to CQRS

Command and Query Responsibility Segregation (CQRS) is a pattern that suggests using different models for reading and writing information.

In standard relational systems you face a dilemma: should the database be normalized or denormalized? In the first case you get a cleanly designed database that is good for writing data but has poor read performance. In the second case you get a headache with your updates, but (if properly denormalized) you can get good read performance.

So, CQRS suggests using two different databases: one optimized for reading (denormalized) and another for writing (normalized). It is a very tempting way of designing systems. The one big problem with this solution is that you somehow need to keep the two databases in sync. You can create some sort of Denormalizer that synchronizes the normalized and denormalized databases after every write to the normalized one.

Is there a simpler solution?
I think there is, using a NoSQL database (for example, MongoDb).

What is MongoDb?

MongoDb is a non-relational data store for JSON documents (more about MongoDb in the "MongoDb for beginners" post below).
A big benefit of MongoDb is that you can easily store/restore the objects you have inside your program to/from the database. This helps a lot in our Hybrid CQRS implementation.

Hybrid CQRS implementation using MongoDb
All commands (write operations) go through the Service layer, where validators and other protective measures are applied. Queries (read operations) are allowed to go directly to the database to speed things up.
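
Below is a minimal sketch of what this split can look like in code. It is not from the original implementation: ProductService, ProductQueries, the "products" collection and the validation rule are illustrative assumptions, and the MongoDb calls use the current official C# driver, so method names may differ from the 2012-era API.

 using System;  
 using MongoDB.Driver;  

 public class Product  
 {  
   public Guid Id { get; set; }  
   public string Name { get; set; }  
   public decimal Price { get; set; }  
 }  

 // Write side: every command goes through the service layer,  
 // where validation and other protective measures live.  
 public class ProductService  
 {  
   private readonly IMongoCollection<Product> _products;  

   public ProductService(IMongoDatabase database)  
   {  
     _products = database.GetCollection<Product>("products");  
   }  

   public void AddProduct(Product product)  
   {  
     if (string.IsNullOrWhiteSpace(product.Name))  
       throw new ArgumentException("Product name is required.");  

     _products.InsertOne(product);  
   }  
 }  

 // Read side: queries are allowed to hit the same database directly.  
 public class ProductQueries  
 {  
   private readonly IMongoCollection<Product> _products;  

   public ProductQueries(IMongoDatabase database)  
   {  
     _products = database.GetCollection<Product>("products");  
   }  

   public Product GetById(Guid id)  
   {  
     return _products.Find(p => p.Id == id).FirstOrDefault();  
   }  
 }  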

Since we use a NoSQL database (MongoDb), it is not strictly necessary to use different databases for reading and writing. If we design our MongoDb database properly, it will be good for reading (due to the way data is stored) and not bad for writing (as long as we do not go too far with denormalization).

This CQRS implementation is applicable when you want to use a Domain-Driven Design approach for your system's core, while retaining fast data reads for your web application.

Tuesday, November 20, 2012

Elmah: filtering "spam" exceptions

Elmah is a great logging facility for ASP.NET.

It can be easily configured in Web.config. Here is an example:
 <configuration>  
  <elmah>  
   <errorLog type="Elmah.XmlFileErrorLog, Elmah" logPath="~/errors/" />  
   <security allowRemoteAccess="1" />  
   <errorMail from="noreply@server.com" to="admin@example.com" subject="Web Exception" async="true" smtpPort="0"></errorMail>  
  </elmah>  
 </configuration>  

But it's very likely that you will get a lot of "spam" error messages in your log and on your e-mail. So you'll want to filter some of those messages.

You can do it in a very flexible way as follows:
 <configuration>  
  <elmah>  
   <errorLog type="Elmah.XmlFileErrorLog, Elmah" logPath="~/errors/" />  
   <errorFilter>  
    <test>  
     <or>  
      <equal binding="HttpStatusCode" value="404" type="Int32" />  
      <regex binding="Exception.Message" pattern="^A potentially dangerous Request$" />  
      <regex binding="Exception.Message" pattern="^The parameters dictionary contains a null entry$" />  
      <and>  
       <regex binding="Exception.Message" pattern="^The operation has timed out$" />  
       <regex binding="Context.Request.ServerVariables['URL']" pattern=".*/some-url/$" caseInsensitive="true" />  
      </and>  
      <and>  
       <regex binding="Exception.Message" pattern="^Invalid image profile*" />  
       <regex binding="Context.Request.ServerVariables['HTTP_USER_AGENT']" pattern="bot" />  
      </and>  
     </or>  
    </test>  
   </errorFilter>  
   <security allowRemoteAccess="1" />  
   <errorMail from="noreply@server.com" to="admin@example.com" subject="Web Exception" async="true" smtpPort="0"></errorMail>  
  </elmah>  
 </configuration>  

As you can see, we just added an <errorFilter> section with various filter conditions combined with Boolean operators (<or>, <and>). It is a very intuitive way to filter out errors that you don't want Elmah to track.
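
If a condition is hard to express in XML, Elmah also lets you filter programmatically via the ErrorLog_Filtering and ErrorMail_Filtering events in Global.asax. A minimal sketch (the concrete conditions here are just illustrations):

 // In Global.asax.cs  
 using Elmah;  

 void ErrorLog_Filtering(object sender, ExceptionFilterEventArgs args)  
 {  
   // don't log file-not-found errors at all  
   if (args.Exception.GetBaseException() is System.IO.FileNotFoundException)  
     args.Dismiss();  
 }  

 void ErrorMail_Filtering(object sender, ExceptionFilterEventArgs args)  
 {  
   // keep timeouts in the log, but don't e-mail them  
   if (args.Exception.GetBaseException().Message.Contains("The operation has timed out"))  
     args.Dismiss();  
 }  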

Sunday, November 18, 2012

MongoDb for beginners


What is MongoDb?

MongoDb is a non-relational data store for JSON documents. By non-relational we mean that it does not store data in tables the way relational databases do. It stores JSON documents.


What is JSON? JSON (JavaScript Object Notation) is a standard for representing data structures in a text-based, human-readable form. JSON documents look like this:

{"Name" : "Bob"} 

This very simple JSON document has the key "Name" and the value "Bob". You can create more complex JSON documents with hierarchies:

{"Name" : "Bob", "Age" : 50, "Hobbies" : ["hockey", "football", "movies"], "Account" : {"Login" : "Bobby", "Password" : "12345"}}

JSON documents are stored within MongoDb collections (collections are analogous to tables in a relational database). This is very useful when writing programs, because this kind of data structure looks a lot closer to what you have inside your program than a relational schema does:
 class Person  
 {  
   public string Name;  
   public int Age;  
   public string[] Hobbies;  
   public AccountDetails Account;  
 }  
 class AccountDetails  
 {  
   public string Login;  
   public string Password;  
 }  

You never do that with a relational table. The fact that MongoDb stores whole documents makes programming your domain model very convenient.
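
For example, saving and loading a Person with the official C# driver takes just a couple of lines. A hedged sketch (the connection string, database and collection names are assumptions, and the calls use the current driver API):

 using MongoDB.Driver;  

 var client = new MongoClient("mongodb://localhost:27017");  
 var database = client.GetDatabase("test");  
 var people = database.GetCollection<Person>("people");  

 // The whole object graph, including the nested AccountDetails,  
 // is stored as a single document.  
 people.InsertOne(new Person  
 {  
   Name = "Bob",  
   Age = 50,  
   Hobbies = new[] { "hockey", "football", "movies" },  
   Account = new AccountDetails { Login = "Bobby", Password = "12345" }  
 });  

 // In practice you would also add an Id member (e.g. public ObjectId Id;)  
 // to Person so the driver can map MongoDb's generated _id field back.  
 var bob = people.Find(p => p.Name == "Bob").FirstOrDefault();  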

Schemaless 

Another interesting aspect of MongoDb is that it is also schemaless. Schemaless means that two documents in the same collection don't need to have the same schema.

For example, you can store these documents inside the same collection:
{"a" : 1, "b" : 2}
{"a" : 1, "b" : 2, "c" : 3}

MongoDb vs Relational databases


The idea of MongoDb is to retain most of the scalability and performance while giving you a lot of functionality to work with, though not quite as much as you get in relational database management systems.

That's because a few things are missing in MongoDb:
  • Joins. MongoDb stores documents, and each document is stored in a collection. But if you want to do a JOIN between two collections, you can't do that in MongoDb. The reason is that JOINs scale very poorly.
  • Transactions. That sounds very bad, but the truth is that in MongoDb you usually don't need transactions where you would need them in relational systems. The reason is that documents in MongoDb are hierarchical and can be accessed atomically (see the sketch below).
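
A hedged sketch of what "atomic access to a hierarchical document" means in practice (the orders collection, its fields and the update are made-up examples; the calls use the current C# driver):

 using MongoDB.Bson;  
 using MongoDB.Driver;  

 var orders = new MongoClient("mongodb://localhost:27017")  
   .GetDatabase("shop")  
   .GetCollection<BsonDocument>("orders");  

 var orderId = ObjectId.GenerateNewId(); // in reality, the _id of an existing order  

 // The order and all of its lines are one document, so this update  
 // either applies completely or not at all - no multi-table transaction needed.  
 orders.UpdateOne(  
   Builders<BsonDocument>.Filter.Eq("_id", orderId),  
   Builders<BsonDocument>.Update  
     .Set("status", "paid")  
     .Push("lines", new BsonDocument { { "item", "book" }, { "qty", 1 } }));  
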
The choice between non-relational and relational data stores is never easy. But if you want to create a scalable, high-performance application, you should consider MongoDb as an option.

This article is based on video lectures from 10gen education course https://education.10gen.com.

Monday, November 12, 2012

SSO in multiple ASP.NET applications under the same domain

Suppose you have two different web applications under the same domain. For example:

First web application(ASP.NET Web Forms): http://example.com/
Second web application(ASP.NET MVC): http://example.com/admin/

Both web applications use ASP.NET Forms authentication.

How do we make it possible to authenticate in one web application and be authenticated in the other?

The answer is to use the same machineKey configuration and the same authentication cookie name in both web applications.

Example config section of the first web application (ASP.NET Web Forms):
 <system.web>  
  <machineKey validationKey="6366D9EDF5591718A1A69557F106AFC16A8A184159028364814BD3B9D48941832E7310C0386DAD406AD04337B4B57D1772430233FCB82E265635DE5E35FF3C4F" decryptionKey="76D34CAEA4614B2EBFB0E20819CFE744389ADCC511D94C8CEA7DA6517C9D0E68" validation="SHA1" decryption="AES" />  
  <authentication mode="Forms">  
    <forms name=".AUTH" loginUrl="~/Login.aspx" />  
   </authentication>   
 </system.web>  

Example config section of the second web application (ASP.NET MVC):
 <system.web>  
  <machineKey validationKey="6366D9EDF5591718A1A69557F106AFC16A8A184159028364814BD3B9D48941832E7310C0386DAD406AD04337B4B57D1772430233FCB82E265635DE5E35FF3C4F" decryptionKey="76D34CAEA4614B2EBFB0E20819CFE744389ADCC511D94C8CEA7DA6517C9D0E68" validation="SHA1" decryption="AES" />  
  <authentication mode="Forms">  
    <forms name=".AUTH" loginUrl="~/Authentication/Login" />  
   </authentication>  
 </system.web>  
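
With the shared machineKey and cookie name in place, a ticket issued by one application is readable by the other. A hedged sketch of how that looks in code (LoginButton_Click and DashboardController are made-up names):

 // First application (Web Forms), e.g. in Login.aspx.cs after validating credentials:  
 protected void LoginButton_Click(object sender, EventArgs e)  
 {  
   // issues the ".AUTH" forms ticket, encrypted with the shared machineKey  
   FormsAuthentication.SetAuthCookie("john.doe", true);  
   Response.Redirect("/admin/");  
 }  

 // Second application (MVC): the cookie is decrypted with the same machineKey,  
 // so the user arrives already authenticated.  
 [Authorize]  
 public class DashboardController : Controller  
 {  
   public ActionResult Index()  
   {  
     var userName = User.Identity.Name; // "john.doe"  
     return View();  
   }  
 }  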

You can generate unique machineKey for your applications here: http://aspnetresources.com/tools/machineKey

Friday, November 9, 2012

Adding custom WCF endpoint behavior in client code

Endpoint behaviors are used to customize the runtime behavior of WCF clients. This is very useful, for example, when we want to send some special information to a WCF service with each call from the client.

Suppose we have an ASP.NET application and we need to send the current user identity with every call to our IPersonalService.

First of all, we need to configure our WCF client. We use the Castle Windsor WCF Facility to do this. In the Application_Start handler of Global.asax we need to create a WindsorContainer, add the WcfFacility, and register our IPersonalService with the custom SecureEndpointBehavior:

 Container = new WindsorContainer();  
 Container.AddFacility<WcfFacility>();  
 var personalServiceUrl = ConfigurationManager.AppSettings["PersonalServiceUrl"];  
 Container.Register(Component.For<IPersonalService>()  
   .AsWcfClient(WcfEndpoint  
     .BoundTo(new BasicHttpBinding())  
     .At(personalServiceUrl)  
     .AddExtensions(new SecureEndpointBehavior())));  

To create the desired endpoint behavior, we need to implement the endpoint behavior itself and a message inspector.

Example of endpoint behavior class:
 public class SecureEndpointBehavior : BehaviorExtensionElement, IEndpointBehavior  
   {  
     public void Validate(ServiceEndpoint endpoint)  
     {  
     }  
     public void AddBindingParameters(ServiceEndpoint endpoint, BindingParameterCollection bindingParameters)  
     {  
     }  
     public void ApplyDispatchBehavior(ServiceEndpoint endpoint, EndpointDispatcher endpointDispatcher)  
     {  
     }  
     public void ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime)  
     {  
       // attach our inspector so every outgoing message gets the user header  
       clientRuntime.MessageInspectors.Add(new SecureMessageInspector());  
     }  
     protected override object CreateBehavior()  
     {  
       return new SecureEndpointBehavior();  
     }  
     public override Type BehaviorType  
     {  
       get { return typeof(SecureEndpointBehavior); }  
     }  
   }  

Example of message inspector class:
 public class SecureMessageInspector : IClientMessageInspector  
   {          
     public object BeforeSendRequest(ref Message request, IClientChannel channel)  
     {  
       // resolve the current site user and attach their id as a custom SOAP header  
       var siteUser = DependencyResolver.Current.GetService<ISiteUser>();  
       var header = new MessageHeader<Guid>(siteUser.UserId);  
       var untypedHeader = header.GetUntypedHeader("UserToken", "MyProject");  
       request.Headers.Add(untypedHeader);  
       return request;  
     }  
     [DebuggerStepThrough]  
     public void AfterReceiveReply(ref Message reply, object correlationState)  
     {  
     }  
   }  

As you can see, the SecureEndpointBehavior class just adds a SecureMessageInspector instance to the list of run-time message inspectors. Our SecureMessageInspector is very simple: it creates a MessageHeader, puts siteUser.UserId into it, and adds this header to the list of outgoing headers.

Now, on the IPersonalService side, you can retrieve the user id under which the call was made. In the IPersonalService implementation, use the following to retrieve it:
 var userId = OperationContext.Current.IncomingMessageHeaders.GetHeader<Guid>("UserToken", "MyProject");  
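
A hedged sketch of how the service implementation might guard and use that header (GetGreeting is a made-up operation on IPersonalService):

 public class PersonalService : IPersonalService  
 {  
   public string GetGreeting()  
   {  
     var headers = OperationContext.Current.IncomingMessageHeaders;  

     // reject calls that did not come through the client-side inspector  
     if (headers.FindHeader("UserToken", "MyProject") < 0)  
       throw new FaultException("Missing user token.");  

     var userId = headers.GetHeader<Guid>("UserToken", "MyProject");  
     return "Hello, user " + userId;  
   }  
 }  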

Wednesday, November 7, 2012

Static polymorphism and dynamic keyword in C#

Static polymorphism is the kind of polymorphism also known as function overloading (or operator overloading).

Static polymorphism means that a function can be applied to arguments of different types, or to a different number and order of arguments.
The parameter types are specified at compile time, so calls can be bound to the right overload at compile time. This is called early binding.

Example of static polymorphism:
 class Program  
 {  
   class A { }   
   class B { }  
   private static void Foo(A a)   
   {   
     //something   
   }   
   private static void Foo(B b)   
   {   
     //something   
   }   
   static void Main(string[] args)  
   {   
     A a = new A();   
     Foo(a);   
     B b = new B();   
     Foo(b);   
   }  
 }  

In this example we know exactly the type of each object (A or B) and the corresponding Foo overload that should be called.

Let's say we get an object from a third-party method and we do not know its exact run-time type (A or B). We could cast the reference to A (or to B) and call the corresponding Foo method. Or we can use the nifty dynamic keyword:
 public class ThirdPartyClass  
 {  
   public static object GetObject() // returns an instance of A or B at run time  
   {  
     //something  
   }  
 }  
 class Program  
 {  
   class A { }   
   class B { }   
   private static void Foo(A a)   
   {   
     //something   
   }   
   private static void Foo(B b)   
   {   
     //something   
   }   
   static void Main(string[] args)   
   {   
     dynamic d = ThirdPartyClass.GetObject();  
     Foo(d);   
   }  
 }  

In this example, the actual type of the variable d, which is declared as dynamic, is resolved at run time. If d is not of type A or B, an exception will be thrown at run time, because no Foo overload accepts that argument type.

Tuesday, November 6, 2012

Installing WCF-client using Castle Wcf Facility

Let's say we want to consume a WCF service in our ASP.NET MVC application. We try to keep the design clean, and we want to inject the dependent WCF service into, say, our MVC controller.

For example:
 public class MyController : Controller  
 {  
   private readonly IAnonymousService _anonymousService;  
   public MyController(IAnonymousService anonymousService)  
   {  
    _anonymousService = anonymousService;  
   }  
 }  

To accomplish this we need a recent version of the Castle Windsor container. You need to create a WindsorContainer, add the Castle WcfFacility, and then register your service.

It may look like this (in your Global.asax Application_Start handler):
 Container = new WindsorContainer();  
 Container.AddFacility<WcfFacility>();  
 var anonymousServiceUrl = ConfigurationManager.AppSettings["AnonymousServiceUrl"];  
 Container.Register(Component.For<IAnonymousService>()  
   .AsWcfClient(WcfEndpoint  
     .BoundTo(new BasicHttpBinding())  
     .At(anonymousServiceUrl)));  
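
For the controller to actually receive the injected IAnonymousService, MVC has to resolve controllers from the container. One common way (a sketch, not part of the WCF Facility itself) is a small Windsor-backed controller factory registered in Application_Start:

 public class WindsorControllerFactory : DefaultControllerFactory  
 {  
   private readonly IWindsorContainer _container;  

   public WindsorControllerFactory(IWindsorContainer container)  
   {  
     _container = container;  
   }  

   protected override IController GetControllerInstance(RequestContext requestContext, Type controllerType)  
   {  
     if (controllerType == null)  
       return base.GetControllerInstance(requestContext, controllerType);  
     return (IController)_container.Resolve(controllerType);  
   }  

   public override void ReleaseController(IController controller)  
   {  
     _container.Release(controller);  
   }  
 }  

 // In Application_Start, after the registrations above:  
 Container.Register(Classes.FromThisAssembly().BasedOn<IController>().LifestyleTransient());  
 ControllerBuilder.Current.SetControllerFactory(new WindsorControllerFactory(Container));  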

Friday, November 2, 2012

Is enabling Double Escaping dangerous in ASP.NET?

Let's say we want to use the plus sign in URL paths in our ASP.NET application.

For example: http://example.com/news/tag1+tag2/

If we try to request this URL, IIS (in fact, the Request Filtering module) will reject the request with a 404 error.

The only way I found to make this work was the following section in Web.config:
 <system.webServer>   
  <security>  
    <requestFiltering allowDoubleEscaping="true"/>  
  </security>  
 </system.webServer>  

With this setting everything works fine, except for the fact that we now allow double escaping for all URLs in our application. So, is enabling double escaping dangerous?

Allowing double escaping tells IIS to forward the request to the ASP.NET application even if there are still encoded fragments in the URL after the first decoding.
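
To see what "double escaping" means, consider how the plus sign looks after each round of encoding (a quick illustration using HttpUtility):

 var once = HttpUtility.UrlEncode("+");       // "%2b"  
 var twice = HttpUtility.UrlEncode(once);     // "%252b"  

 // Request Filtering decodes the URL once; if the result still contains  
 // escape sequences (like "%2b"), the request is treated as double escaped  
 // and rejected unless allowDoubleEscaping is enabled.  
 var decodedOnce = HttpUtility.UrlDecode(twice);        // "%2b"  
 var decodedTwice = HttpUtility.UrlDecode(decodedOnce); // "+"  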

Many would say that it's not a big deal, but I decided not to take the risk, so I changed the "+" delimiter in URLs to "_" and removed the section from Web.config.

So, my URLs look like:
http://example.com/news/tag1_tag2/