Wednesday, December 5, 2012

Pittsburgh .NET Lightning Talks

I'll be speaking at the Pittsburgh .NET Users' Group Lightning Talks next Tuesday, doing a 15-minute intro to AngularJS. I wish it were a bit longer; there is a lot of cool stuff in Angular I won't get a chance to go over. Then again, given this is my first ever speaking engagement, maybe it's good it's only 15 minutes.

I'm really looking forward to the other talks as well; some pretty smart folks are going to be speaking there. I hope I can keep up.

For more information on the Pittsburgh .NET Users' Group, check them out on Meetup.

Monday, December 3, 2012

Angular JS: Custom Validation via Directives

Okay, for this entry on Angular, I'm going to write up a quick bit on custom validation. I've gone over form validation before, but I think that there are still plenty of cases that Angular's default validation just doesn't cover. A lot of people's first instinct is to resort to calling controller functions to do their validation. You could do that, but that would break the really slick validation model Angular already has in place.

What you really want to do is build a directive that requires ngModel. Requiring ngModel will pass the ngModelController into the linking function as the fourth argument. The ngModel controller has a lot of handy functions on it; in particular there is $setValidity, which allows you to set the state of a model field as $valid or $invalid, as well as set an $error flag.

Here is an example of a simple regex validation with a custom validation directive in JSFiddle:




Here is the code for the custom validation directive:

app.directive('regexValidate', function() {
    return {
        // restrict to an attribute type.
        restrict: 'A',
        
        // element must have ng-model attribute.
        require: 'ngModel',
        
        // scope = the parent scope
        // elem = the element the directive is on
        // attr = a dictionary of attributes on the element
        // ctrl = the controller for ngModel.
        link: function(scope, elem, attr, ctrl) {
            
            //get the regex flags from the regex-validate-flags="" attribute (optional)
            var flags = attr.regexValidateFlags || '';
            
            // create the regex obj.
            var regex = new RegExp(attr.regexValidate, flags);            
                        
            // add a parser that will process each time the value is 
            // parsed into the model when the user updates it.
            ctrl.$parsers.unshift(function(value) {
                // test and set the validity after update.
                var valid = regex.test(value);
                ctrl.$setValidity('regexValidate', valid);
                
                // if it's valid, return the value to the model, 
                // otherwise return undefined.
                return valid ? value : undefined;
            });
            
            // add a formatter that will process each time the value 
            // is updated on the DOM element.
            ctrl.$formatters.unshift(function(value) {
                // validate.
                ctrl.$setValidity('regexValidate', regex.test(value));
                
                // return the value or nothing will be written to the DOM.
                return value;
            });
        }
    };
});


... and here is an implementation of our custom validation directive in markup:

<div ng-app="myApp">
    <div ng-controller="MainCtrl">
        <form name="myForm" ng-submit="doSomething()">
            <label>Must contain the word <strong>blah</strong> (case insensitive)
                <br/>
                <!-- set up the input, make sure it:
                    1. has ng-model
                    2. has a name="" so we can reference it in the model.
                    3. has regex-validate
                    4. (optional) has regex-validate-flags -->
                <input type="text" placeholder="Enter text here"
                ng-model="test" name="test" regex-validate="\bblah\b"
                regex-validate-flags="i"/>
            </label>
            <!-- set up some sort of output for validation
                    the format here is:  [formName].[fieldName].$error.[validationName]
                    where validation name is determined by 
                    ctrl.$setValidity(validationName, true/false) in your
                    custom directive.
            -->
            <span style="color:red" ng-show="myForm.test.$error.regexValidate">WRONG!</span>
            
        <!-- for added measure, disable the submit if the form is $invalid -->
        <button type="submit" ng-disabled="myForm.$invalid">Submit</button>
        </form>
    </div>
</div>


After all is said and done, we have a reusable validation directive that we can wire up seamlessly, with no additional function calls, and that integrates into Angular's already awesome validation. Within the custom directive itself, there are a thousand ways to skin the cat I skinned above; the possibilities are really up to the author. As long as we stick with something maintainable and testable, I think any solution anyone comes up with is perfect. I'm sure there are a few good GitHub repositories out there for Angular validation directives, but it's quick and easy to make your own too. Have fun.

EDIT: I've written an entry on validation inside an ng-repeat: Validating Form Elements in a Repeat

Monday, November 19, 2012

Angular JS: Form Validation

Basic form validation is already done for you in Angular JS... if you just know how to use it. Form validation in Angular is all done by directives, which means you just wire it up in markup, and you don't need to write a "validation function" in your controller to handle any of that dirty work.

Angular has wired up most form elements themselves to be directives, so <input type="email"/> will validate an email address and so on. Angular also looks for attributes like required and handles them appropriately.

Let's have a look at some basic form validation in Angular:

The form

<form name="mainForm" ng-submit="sendForm()">
    <div>
      <label for="firstName">First Name</label>
      <input id="firstName" name="firstName" type="text" ng-model="person.firstName" required/>
      <span class="error" ng-show="mainForm.firstName.$error.required">required</span>
    </div>
    <div>
      <label for="lastName">Last Name</label>
      <input id="lastName" name="lastName" type="text" ng-model="person.lastName" required/>
      <span class="error" ng-show="mainForm.lastName.$error.required">required</span>
    </div>
    <div>
      <label for="email">Email</label>
      <input id="email" name="email" type="email" ng-model="person.email" required/>
      <span class="error" ng-show="mainForm.email.$error.required">required</span>
      <span class="error" ng-show="mainForm.email.$error.email">invalid email</span>
    </div>
    <div>
      <input type="checkbox" ng-model="agreedToTerms" 
        name="agreedToTerms" id="agreedToTerms" required/>
      <label for="agreedToTerms">I agree to the terms</label>
      <span class="error" ng-show="mainForm.agreedToTerms.$error.required">You must agree to the terms</span>
    </div>
    <div>
      <button type="submit">Send Form</button>
    </div>
  </form>


Here's a Plunker of the code at work:
NOTE: Plunker does not like Safari or IE; if you're seeing odd behavior in those browsers, there you go.



As you can see above, the validation information changes dynamically as you enter data, thanks to directives.

A few things to know about form validation in Angular:



  • ng-submit will still fire even if the form is $invalid, so check myForm.$valid in your handler or disable the submit button with ng-disabled="myForm.$invalid".
  • All validation and form data is actually stored in the scope under $scope['myFormName']. This is why you can write it out with Angular binding.
  • Access to validation information can be found via: $scope.nameOfForm.nameOfField. This means adding a name="" attribute to your inputs is very important.
  • Angular will automatically add CSS classes: ng-valid, ng-invalid, ng-dirty and ng-pristine to DOM elements to reflect their state.
There is also a lot of important information to be found in the documentation for the input directive.
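
Those automatically applied classes make it easy to style validation state with CSS alone. For example (the class names are Angular's; the style rules themselves are just illustrative):

```css
/* Angular toggles these classes on inputs as their state changes. */
input.ng-invalid.ng-dirty {
  /* the user has touched the field and it fails validation */
  border-color: red;
}
input.ng-valid.ng-dirty {
  border-color: green;
}
input.ng-pristine {
  /* untouched fields stay neutral */
  border-color: #ccc;
}
```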


EDIT: Additional information can be found in these blog entries:


Thursday, November 15, 2012

ALE Event Loop Framework - Added Promises

Did a little work tonight on ALE, my event loop framework for .NET. Decided to add a Promise implementation that leveraged the event loop. I fiddled with the usage a bit, so hopefully it's to people's liking.  I'm looking for any and all feedback on this addition.

For those of you that aren't familiar with promises, I did a blog entry a while ago about promises in JavaScript. This is pretty much the same idea. While in .NET there are WaitHandles and Tasks for synchronizing multiple asynchronous calls, I wanted something that would leverage the ALE's event loop engine that would do the same thing. So I decided to write a simple promise implementation.

Basic usage looks like this:

//create a single promise.
Promise.To((defer) => 
{
    try {
       //Do something here, and return a value
       defer.Resolve("complete");
    }catch{
       defer.Reject("this has been rejected.");
    }
}).Then((data) => 
{
    //When the above is complete, write the return value
    Console.WriteLine(data);
});

More advanced usage would be like so:

//a method to create a promise and run it.
public Promise Foo(string bar) {
    return Promise.To((defer) => 
    {
        Print(bar);
        defer.Resolve(bar);
    });
}

//using When to await the completion of multiple Promises.
Promise.When(Foo("One"), Foo("Two"), Foo("Three"))
   .Then((data) =>
    {
        Print("Complete");
    });

The above code would be used in situations where you might need to wait for multiple asynchronous calls before continuing. Using ALE's Promise class, you'll be able to do so without blocking a thread.

I'm really looking for any feedback/code-review anyone wants to do. So if you have the time please look it over.

Upcoming for ALE, I'm going to try to make most of the Asynchronous calls return a Promise, to enable more complex chaining of async calls. Also, possibly a method to re-execute a Promise.

Wednesday, November 7, 2012

Angular JS - Directive Basics

A while ago I posted some very basic information about AngularJS. There are a lot of really cool things to go over in Angular, but I think the most important thing to go over is probably directives. Directives are what tie everything together.


Where the rubber meets the road


Directives in Angular are easily the most powerful and complicated piece of the puzzle. Directives are used to set up DOM manipulations, interactions between the DOM and the scope, and a great many other things. Examples of directives are all over the Angular core framework. ng-model, ng-click, ng-repeat and ng-app are all examples of directives. Even the select, textarea and input tags have been extended as directives. Directives can be used to set up JQuery plugins, do validation, and create custom reusable controls.


Directives come in many different flavors


  • Elements - such as <my-directive>expression here</my-directive>
  • Attributes - such as <div my-directive="expression here"></div>
  • Classes - such as <div class="my-directive: expression here;"></div>
  • Comments - such as <!-- directive: my-directive expression here -->
All of the above examples could even be the exact same directives used differently.
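
Which flavors a directive supports is controlled by its restrict option. Here's a quick sketch of a directive definition that allows all four usages at once (the factory function is shown standalone here; normally you'd register it with app.directive('myDirective', myDirectiveFactory)):

```javascript
// A hypothetical directive definition allowing all four flavors.
function myDirectiveFactory() {
  return {
    // E = element, A = attribute, C = class, M = comment;
    // any combination of the four letters is allowed.
    restrict: 'EACM',
    template: '<span>expression here</span>'
  };
}
```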

The anatomy of a directive


Warning: the following example is contrived, and really silly. But I'm trying to illustrate the most commonly used pieces of a directive declaration.

var app = angular.module('plunker', []);

app.controller('MainCtrl', function($scope) {
  $scope.name = 'World';
});

//the following will declare a new directive that
// may be used like <my-directive name="foo"></my-directive>
// where foo is a property on a controller's scope.
app.directive('myDirective', function(){
  // The above name 'myDirective' will be parsed out as 'my-directive'
  // for in-markup uses.
  return {
    // restrict to an element (E = element, A = attribute,
    // C = class, M = comment) or any combination like 'EACM' or 'EC'
    restrict: 'E',
    scope: {
      name: '=name' // set the name on the directive's scope
                    // to the name attribute on the directive element.
    },
    //the template for the directive.
    template: '<div>Hello, {{name}} <button ng-click="reverseName()">Reverse</button></div>',
    //the controller for the directive
    controller: function($scope) {
      $scope.reverseName = function(){
        $scope.name = $scope.name.split('').reverse().join('');
      };
    },
    replace: true, //replace the directive element with the output of the template.
    //the link method does the work of setting the directive
    // up, things like bindings, jquery calls, etc are done in here
    link: function(scope, elem, attr) {
      // scope is the directive's scope,
      // elem is a jquery lite (or jquery full) object for the directive root element.
      // attr is a dictionary of attributes on the directive element.
      elem.bind('dblclick', function() {
        scope.name += '!';
        scope.$apply();
      });
    }
  };
});

The above directive is a crude example. It will output a "Hello, World" statement with a button to reverse the name with just the following markup: <my-directive name="name"></my-directive>, presuming the parent scope has a property name equal to "World". It will also set up a double-click event that will tack an exclamation point on the end of the name.

And here's my absurd directive in action:

 


Fears of "Custom HTML tags" are unfounded


Have no fear. Angular is not destroying your perfect markup; it's only using custom tags as placeholders, nothing more. The HTML spec itself even says that unknown tags should be ignored. If you're using "replace: true" in your directives, it's all replaced by whatever HTML you put in the template anyhow. This is a common complaint I've heard about Angular, and it's just a bad reason not to at least try it. It's an incredibly fun and powerful tool.


Directive Tips & Gotchas


  • Use the existing directives to do your event binding if possible. Don't bind events with JQuery anymore, just stop it. Also, DO NOT do what I did in my example and create your own simple binding like "dblclick"; there is already a directive for that (ng-dblclick).
  • You can nest directives. A directive's template may contain other custom directives.
  • Put them in their own module. Generally, it's a good idea to organize your directives into their own module. This promotes reuse in other modules as well as a separation of concerns.
  • I've witnessed self-closing directive tags not function properly. Always use both the open and close tags for your element directives.

For more comprehensive information about directives, have a look here.

Tuesday, October 23, 2012

Angular JS - The Very Basics

As I'm all about Angular JS these days, and the four friends I have that read this blog have told me to do a blog entry about it, I figured I would finally write a blog entry about the basics of Angular JS.

What is Angular?


It's a JavaScript framework from Google that encompasses:
  • Two way binding
  • Dependency injection
  • Built-in handling of RESTful services
  • Handling of AJAX calls
  • Handling of hash-based routing (for single page applications)
  • Templating
  • Reusable controls
So... essentially it does everything that Knockout and Backbone do, and then some.

So enough of that, let's get to some code.

Here is the basic code required to do something simple. In this case, I've made a little "Hello World" app that allows the user to reverse the name with a click.

/* Declare your main module.
 * this is your angular application module*/
var app = angular.module('myApp', []);

/* register a controller */
app.controller('MainCtrl', function($scope) {
  /* $scope is what is used for two-way binding */
  $scope.name = 'World';
  
  /* declare a method to call from a click in our markup */
  $scope.reverseName = function () {
    $scope.name = $scope.name.split('').reverse().join('');
  };
});

And here's the markup required:

<!doctype html>
<html ng-app="myApp" >
<head>
  <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.0.2/angular.js"></script>
  <script src="app.js"></script>
</head>
<body ng-controller="MainCtrl">
  <p>
    <label>Name</label>
    <input type="text" ng-model="name"/>
    <button ng-click="reverseName()">Reverse</button>
  </p>
  Hello {{name}}!
</body>
</html>


Here's a demo to try that out. (I used Plnkr for this; I recommend Plnkr over JSFiddle for playing with Angular because it works a little better.)



Pretty straightforward, I think. That sums up the basics, really. The VERY basic basics. I'll be showing a lot more features as time goes on, but this is a good starting point.

Fears of Angular's "Custom HTML Tags"


One of the primary complaints I read online from developers that really don't understand Angular is over its use of "non-standard HTML" for directives. I'll address that a little more when I post about directives and reusable controls, but I wanted to make it a point to say something on the matter in my first post about Angular: Angular actually only uses "non-standard" HTML tags as placeholders. Those placeholders are in fact replaced by Angular with standard HTML tags (presuming the author of the Angular app in question is writing valid HTML). More on that to come, but it's really not as scary as it looks, and it's so powerful, it's definitely worth a look.

Sunday, October 14, 2012

Validating A Custom Control in Angular JS with Angular's Built-In Validations

So I recently ran into an issue where I wanted to create a custom control that was basically two inputs that output to one piece of data. The issue was that sometimes it was required, and other times it wasn't, but I didn't want to require just one field or the other. I wanted the whole control to register as required. The control in question was a month/year date control, which was a little complicated, logic-wise. So to illustrate the basic idea, I put together a quick plunker with a custom addition control.


Here are the basic steps to follow:
  1. Create a new directive.
  2. The directive must require: 'ngModel', this does most of the angular wiring for you.
  3. The link function of the directive must do two things:
    1. Set a $watch on the item in the directive's scope you want to use as the control's value and use $setViewValue() to set the value of the view for ngModel.
    2. Set the $name for ngModel. This is used for validation purposes like myForm.fieldName.$error.required

Here is the directive for the custom addition control.

app.directive('blCustomSum', function(){
  return {
    restrict: 'E',
    require: 'ngModel', //important!
    template: 
      '<div>' + 
        'A <input ng-model="a" ng-change="add()"/> +  ' + 
        'B <input ng-model="b" ng-change="add()"/> = ' +
        '{{c}}' +
      '</div>',
    controller: function($scope) {
      // just a controller method to add the numbers
      // and set the value of c.
      $scope.add = function(){
        if(!isNaN($scope.a) && !isNaN($scope.b)) {
          $scope.c = parseFloat($scope.a) + parseFloat($scope.b);
        } else{
          $scope.c = '';
        }
      };
    },
    link: function(scope, elem, attr, ctrl) {
      // ctrl is what has been set up by the ngModel directive.
      // wire up c to be the value of ngModel for this control
      scope.$watch('c', function(value) {
        ctrl.$setViewValue(value);
      });
      // set the name for ngModel and validation.
      ctrl.$name = attr.name;
    }
  };
});


And here is an example of how to use it in template markup:

<body ng-controller="MainCtrl">
<form name="myForm">  
  <bl-custom-sum ng-model="sum" name="sum" required></bl-custom-sum><br/>
  <span ng-show="myForm.sum.$error.required">required</span><br/>
  form valid: {{myForm.$valid}}
</form>
</body>






Friday, September 14, 2012

JavaScript - Promises, Promises

Now that the web development field is shifting more and more to the client, asynchronous calls in JavaScript are becoming more and more frequent. Many times recently, I've seen scenarios where there are callbacks of callbacks of callbacks in order to load multiple items, which are often independent of one another, prior to some other code executing. I like to call this code "The Flying V".

FLYING V!!!

$.ajax(options)
   .success(function(x) {
      $.get(url, function(y) {
         $.post(url, data, function(z){ 
            doSomethingHere();
         });
      });
   });

What's going on in this scenario is pretty common: There is a bunch of data needing to be loaded, and/or other asynchronous tasks, and the author isn't quite sure what to do to have something fire only when it's all done.

Promises to the rescue. Believe it or not, most popular JavaScript libraries already implement a Promise pattern: JQuery does and so does AngularJS. So if you're using one of these you're in luck, all you need to do is leverage a little known feature of your favorite library.

Here is an example of Promises in JQuery:

$.when(
  $.ajax(options),
  $.get(url),
  $.post(url, data)
).then(function() {
  doSomethingHere();
});

But what if you don't have access to this? What if you're using the best library ever: Vanilla JS? Well, then you can roll your own, or use this simple promise code I put together. I put it up on GitHub, so use it, fork it, blame it, whatever. It's a pretty small amount of code, ~480 bytes minified.
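
To show the shape of the idea, here's a minimal vanilla JS sketch of a deferred with a when() combinator (this is an illustration of the pattern, not the actual code from the GitHub repo):

```javascript
// A deferred: then() queues callbacks, resolve() fires them.
// Callbacks added after resolution fire immediately with the value.
function Deferred() {
  var callbacks = [], resolved = false, value;
  this.then = function (cb) {
    if (resolved) { cb(value); } else { callbacks.push(cb); }
    return this;
  };
  this.resolve = function (val) {
    resolved = true;
    value = val;
    callbacks.forEach(function (cb) { cb(val); });
    return this;
  };
}

// when(): returns a deferred that resolves only once every
// deferred passed in has resolved.
function when() {
  var pending = arguments.length, all = new Deferred();
  Array.prototype.forEach.call(arguments, function (d) {
    d.then(function () {
      if (--pending === 0) { all.resolve(); }
    });
  });
  return all;
}
```

A real implementation also needs rejection handling and guards against double-resolution, but this is the core of how the Flying V gets flattened.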

Simple promise


Here's a sample of it being used:




This is an extremely simplistic implementation. There are more robust and mature implementations out there. Kris Kowal's Q is a great example of this. It's my understanding that his work is what AngularJS based their promise system off of. I'm sure Mr. Kowal would probably find my simple implementation laughable, but it's made to be lightweight and do one thing.

Sunday, August 26, 2012

ASP.Net Web API Error Handling, HTTP Status Codes, and You


EDIT: I actually recommend against a lot of what I'm saying here now. Web API endpoints should always be returning HttpResponseMessage, and those messages can always be created by Request.CreateResponse(). I'll write more on this later.


So it seems to me that not a lot of people have figured out what they should be doing when they want to throw an error from their Web API, but give something back to the client that contains some sort of information about what happened. You might be here because you've realized that simply throwing any old exception from your Web API results in a "500: Internal Server Error" with exactly nothing in the body of the response that might explain to the client what went wrong.

There are a few things at play here. The quick and dirty version is you're probably throwing the wrong type of exception, and returning the wrong type of status code. Let me explain:


You need to be throwing HttpResponseException


When you throw just any old error, ASP.Net interprets that as an error in the operation of your web application. In other words, it thinks (and rightfully so) that you've experienced an "internal server error". As such, it just sends out a 500 error. You might think, "Well great, but why doesn't it send out the message in my exception? Why can't they do that for me? Shouldn't that be done for me?" Well, actually no, you don't want to send an explanation with a 500 error, but I'll get to that.

If you throw an HttpResponseException, ASP.Net knows that you're attempting to throw an error that you'd like to communicate back to the client with specific information. This, however, does not mean that you should use a 500 error, or that there will suddenly be a body with that 500 error. Because there shouldn't be.


You should be returning 4XX errors, not 5XX errors


The reason the developers of this technology opted not to send the contents of your exception out with a 500 error is because 5XX errors are meant to be sent out as a notification that an error occurred on the server, and there's nothing the client can do about it. Think of it like that annoying "engine maintenance" light on the dashboard of your car, if you're getting that, just pounding on the gas harder isn't likely to do anything to fix the issue. It's an internal issue.

400 errors are your friends. 4XX errors are there to state that an error occurred and there's something the client can do about it. There is a whole list of them, but the two you'll probably need the most are 404 and 400. 404 everyone knows... "not found". This can be returned when content isn't found. For example, if a user queries for some widget by id, and the id is invalid... your app didn't find it, so 404. 400 is for a bad request. That means the server got the request okay, but there was something wrong with it and it shouldn't be resent unless something about it changes. (I'm sure somewhere the authors of RFC 2616 want to slit their wrists after reading my grotesque butchery of their words.)

For a more complete list read the definitions yourself. Just be sure to actually read the definitions, some of them have rather generic sounding names but very specific meanings. For example, "406 - Not Acceptable" which actually means that the type requested in the Accept header isn't something the server can return, so if the client requested "text/fibbertyjibbets" rather than "application/json" they're S.O.L. and they're going to get a 406. Anyhow, the point is, read the definitions before you use them. Odds are, most of the time, 400 and 404 will do.

So 4XX errors will actually return a reason along with them, as well as a body that offers some explanation. This is way better than a 500: Internal Server Error with no explanation. And it's also the more correct response.


But how to do this in ASP.Net Web API?


Since ASP.Net Web API can return a variety of content types: XML and JSON for example, I think the most appropriate thing to do is return a very simple string with the basic reason in it. It can be interpreted easily by any client, and it's simple to implement.

What I did was add the following methods to my custom ApiController base class. Then when I need to use them, I basically call: throw BadRequest("Invalid somethingerother"); or throw NotFound("Record not found"); This does everything I need it to do to return something valid to the client.

/// <summary>
/// creates an <see cref="HttpResponseException"/> with a response code of 400
/// and places the reason in the reason header and the body.
/// </summary>
/// <param name="reason">Explanation text for the client.</param>
/// <returns>A new HttpResponseException</returns>
protected HttpResponseException BadRequest(string reason)
{
    return CreateHttpResponseException(reason, HttpStatusCode.BadRequest);
}

/// <summary>
/// creates an <see cref="HttpResponseException"/> with a response code of 404
/// and places the reason in the reason header and the body.
/// </summary>
/// <param name="reason">Explanation text for the client.</param>
/// <returns>A new HttpResponseException</returns>
protected HttpResponseException NotFound(string reason)
{
    return CreateHttpResponseException(reason, HttpStatusCode.NotFound);
}

/// <summary>
/// Creates an <see cref="HttpResponseException"/> to be thrown by the api.
/// </summary>
/// <param name="reason">Explanation text, also added to the body.</param>
/// <param name="code">The HTTP status code.</param>
/// <returns>A new <see cref="HttpResponseException"/></returns>
private static HttpResponseException CreateHttpResponseException(string reason, HttpStatusCode code)
{
    var response = new HttpResponseMessage
        {
            StatusCode = code,
            ReasonPhrase = reason,
            Content = new StringContent(reason)
        };
    return new HttpResponseException(response);
}

Here's an example of one of the above methods in use:

public Widget GetById(int id) 
{
    using(var context = new MyDataContext()) 
    {
        var widget = context.Widgets.FirstOrDefault(x => x.Id == id);
        //if we don't have the widget, throw our HTTP error.
        if(widget == null) throw NotFound("Could not find widget: " + id);
        return widget;
    }
}



You can of course implement this same basic idea however you like, but the principle would remain the same. It's good to have a better understanding of HTTP status codes and what they mean, and it's good to use them appropriately.


What you DO NOT want to do


You do not want to return the guts of a whole Exception object in the body of your error response. You don't need to, or want to, send the whole stack trace and other sensitive information to any old client that makes a bad request. It's just not a good idea. Stack traces and other such things should be recorded for you by a good logging solution, and they should never, ever be returned to clients. Particularly unknown clients.

You do not want to return the wrong HTTP status codes. Will it break anything? Probably not. Will it make you look like a chump that didn't read the RFC before you used the code? Yes. Don't do it. Don't be a chump.

EDIT: I've changed this slightly. It was really late and I was tired when I was writing this. I meant 400 error, not 409. 409 is for when you have a state conflict, like a double submit of some data, or data in an unexpected state that is causing an issue. It could be valid for nearly any error, I suppose, but it seems more specifically tied to the state of some data than 400.

Wednesday, August 22, 2012

Just Salting And Hashing Your Passwords Isn't Enough

I was tempted to reference that Angelina Jolie movie here... But I resisted.

I won't go over this in too much detail, because that's already been done by many a development blog, but the basics are:

  • Never store plain text passwords, store hashes. Storing plain text passwords is a cardinal sin, when a hacker gets ahold of this (assume the worst), they'll have a list of emails and favorite passwords. This is gold to them. They'll get into gmail accounts, and from there it's all over.
  • Require some level of complexity to your passwords. Weak passwords are easier to brute force... duh. Enforce a policy that ensures a high level of entropy.
  • Salt your passwords prior to hashing. Add something special to each password so it's harder to recover the original password from the hash. A good-sized salt will also eliminate the effectiveness of rainbow tables.
  • ...With a random salt. Even better, everyone gets their own salt. Salt for everybody!
  • Make it slow on purpose. This one might seem odd to some people. It's a really good idea to make sure your hashing algorithm is good and slow. This is because if it's too quick it makes it easier to brute force. 

Something to know: this security needs to be updated as hardware gets faster. Basically, as hardware gets faster, you'll want to add iterations to your PBKDF2 step to keep the time to compute the hash high enough that it's not easily brute forced by current hardware. 200-400 ms on top-notch hardware should be more than enough to slow down a brute force attack to the point where it's not feasible.


Microsoft .NET has a built-in implementation of PBKDF2 for iterative stretched key generation and HMAC SHA512 for keyed hash creation. The following code demonstrates such an implementation. I think it's pretty self-explanatory, but if not, here's the play-by-play:


  1. Get a salt using RNGCryptoServiceProvider. This is just a cryptographically secure random number generator.
  2. Use PBKDF2 (Rfc2898DeriveBytes) to get a key from the salt and the text over many iterations.
  3. Hash the text with HMACSHA512 using the key and the salt.

A test harness for the Hasher class


The Hasher class

Wednesday, August 15, 2012

Creating A Digitally Signed Security Token For A REST API

Recently I was tasked with securing a Web API. I wanted to use token authentication that didn't require that I look up a user id from that token for every request. In order to do this, I knew I'd have to embed the user id and possibly some other data in the token string and make it tamper-proof (esque).

My solution was to use RSA digital signing to sign my token string to ensure it hasn't been tampered with. Then I'd send the signature as part of the token string so there is something to check on the other end. Digital signing with RSA uses the private key to sign some data, and the public key (or the private key, if in a safe environment) to verify the signature.

EDIT: I wanted to express the advantage of having a public key, because I didn't really say anything about that in my initial post: A big advantage to using an asymmetric algorithm to do the digital signing is the fact that you can share a public key. If you have third party applications consuming your API, you can assign each of them a key pair, storing their private key yourself and giving them the public key. They can then use the public key to verify your tokens independently without making a call to your API for any reason.

There are a few drawbacks to doing things this way rather than some sort of token where you're looking up the token in a database:


  • The biggest drawback is you're limited in the amount of information that can be attached to your token without making your token huge. How big is too big? Well, that's up to you, really. But I like to keep mine under 200-300 characters or so, depending on how I'm sending it back and forth.
  • The other drawback is that you must build an expiration into these types of tokens, since there is no way to manually expire them from the server side. Otherwise, once you issue a token, it's good forever, until you change your private key.

What follows is a simplified version of my solution. What I did was a bit different, as I did some custom binary serialization to reduce the size of my token string, but overall this is the general idea:




Thursday, July 19, 2012

String.Empty vs "" - How .NET Handles String Instances

For the longest time, I've known people, myself included, who used String.Empty to represent "" in code, because back in the 1.0 - 1.1 days, for every String literal you created, you were creating an object in memory. String.Empty was a static reference to the same object, so using that prevented the developer from creating all sorts of empty strings in memory. This is how it was for many years, until something changed, and frankly, I didn't get the memo.

String.Empty and "" are now literally the same object. In fact, any two matching string literals are the same reference as well. This isn't true for other primitive types, like integers for example, but for string literals it is. Have a look:

private static void TestStringInternment()
{
   TestEquality("\"\" and \"\"", "", "");
   TestEquality("\"\" and String.Empty", "", String.Empty);
   var x = "foo";
   var y = "foo";
   TestEquality("x and y", x, y);
   x += "!!!";
   TestEquality("x and y again", x, y);
   TestEquality("0 and 0", 0, 0);
}

static void TestEquality<T1, T2>(string name, T1 a, T2 b) where T1: IComparable where T2: IComparable
{
   Console.WriteLine("Equal: {0}\tSameReference: {1}\t// {2}", a.Equals(b), Object.ReferenceEquals(a, b), name);
}

Output:

Equal: True     SameReference: True     // "" and ""
Equal: True     SameReference: True     // "" and String.Empty
Equal: True     SameReference: True     // x and y
Equal: False    SameReference: False    // x and y again
Equal: True     SameReference: False    // 0 and 0

So as you can see, as long as the literal values of the strings are the same, they're actually the same instance. But why is this happening, and how? Well, the why is pretty simple: strings can be any size in memory, so it's probably a good idea to manage their memory usage as closely as you can. The how is also pretty simple: since strings are immutable, it's safe to point all variables with matching string literals at the same reference, because you know that instance won't change. The CLR interns string literals so that each matching literal points to the same instance in the intern pool.

So why isn't this done with something like int? Int is immutable too! ... I presume it's because an int is only 4 bytes and has a very small memory footprint, whereas a string can be any number of bytes long, and is almost always longer than an int.

One thing to note, however, is that some code will indeed create a new string instance, StringBuilder for example. Even if it outputs the same value as another string, unless you call String.Intern() on the output, it won't use the value stored in the intern pool. This doesn't mean you need to intern every string you get from StringBuilder, it just means you should be aware that not all strings are referenced from the intern pool.

So, I'll admit it, I didn't know this fun fact for WAY too long. I was aware of the intern pool, but I thought that was something that needed to be done explicitly. Now I know better and I figured I would share with my friends, who if they did know, never corrected me. :P Thanks, jerkfaces. LOL

Friday, July 13, 2012

Life Without JQuery

It might be a shock to some, but before the Resig Singularity occurred on August 26, 2006, many developers actually used plain old JavaScript in their web applications, and had been doing so for a long time.

JQuery is a very powerful tool, no doubt. It puts a nice, shiny API on the clunky, old DOM. Also, its ubiquitous nature means that most of the time, it's already on someone's machine if you're using a CDN or Google APIs to reference the file. When you're using a large amount of DOM manipulation and AJAX calls, especially in a client-heavy app, JQuery is a must. This is an incredibly common scenario, so common that quite often the boilerplate for almost any web application platform comes with a reference to JQuery almost by default. For some reason I always found this a little disturbing.

Is it true that JQuery always makes sense? The answer is plainly no. I realize this blog entry is beating a dead horse, but I think it's worth bringing up again so the idea doesn't get stale. You don't always need JQuery.

But when does using JQuery just not make sense?

  • When you're only doing simple actions on the DOM: If you're only doing something like adding a few numbers together and updating a text field or other element, you probably don't need JQuery.
  • If you're only making a few AJAX calls in an otherwise small application: XMLHttpRequest is your friend. It's not such a bad thing to understand how it works.
  • If you're only using it to bind events to elements: While I admire your desire to keep things unobtrusive, addEventListener and attachEvent work just fine. You'll probably want to make a little helper function or something, but trust me, it can be done without JQuery.
  • When you're using another framework, and you're barely using JQuery: Okay, I'll have to explain this one a little bit. If you're using a framework like AngularJS or Backbone, they have their own Ajax handlers. If you're no longer using JQuery's ajax handlers, it might be worth checking to see how you're using JQuery, and how often it's used. You may just be adding overhead to your application for little benefit.
  • If your application is just small: You don't need JQuery to write document.write('Hello World'); It's a little silly to create an application where the JavaScript libraries it's using are larger than the actual application would be without them.
Replacement parts for JQuery:

Selecting elements from the DOM with a CSS-style selector: you can use document.querySelectorAll, which is supported in IE8 and above, as well as good browsers. Basically it allows you to use most selectors, but doesn't support all of the fanciness that JQuery does.

var nodes = document.querySelectorAll('#idname .classname');


For binding events you would have to use addEventListener (and for IE7 support, attachEvent).

var btn = document.getElementById('myButton');
var buttonClicked = function(e) {
   alert('clicked');
};
if(btn.addEventListener) {
   btn.addEventListener('click', buttonClicked);
} else if(btn.attachEvent){
   btn.attachEvent('onclick', buttonClicked);
}
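That feature detection can be wrapped in the kind of "little helper function" mentioned above (a sketch; the name is made up):

```javascript
// Bind an event handler using whichever API the browser supports.
function on(el, type, handler) {
  if (el.addEventListener) {
    el.addEventListener(type, handler);
  } else if (el.attachEvent) {
    el.attachEvent('on' + type, handler); // attachEvent wants the 'on' prefix
  }
}
```

Usage then becomes a one-liner: on(document.getElementById('myButton'), 'click', buttonClicked);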


For AJAX calls, there is the XMLHttpRequest.

var xhr = new XMLHttpRequest();
xhr.onreadystatechange = function() {
   if(xhr.readyState == 4) { //complete
      if(xhr.status == 200) { //OK
         alert(xhr.responseXML);
      }
   }
};
xhr.open('GET', 'http://sample.com/resource', true);
xhr.send(null);


In the end, it's not always necessary to use JQuery; that said, I highly recommend using it in most cases. It's just important to note that JQuery is really just some really nice wrappers around things you can do yourself, and in some cases you don't need everything that JQuery brings with it.

Thursday, July 5, 2012

Should I Return A Collection Or A Null? It Doesn't Matter.


Modularity, People!


While perusing reddit, I came across this blog post and subsequent comments where people were debating whether you should return a null or an empty collection. The point is, it shouldn't matter. Consuming code shouldn't make assumptions about what it's getting back from another piece of code. Ever. If it could possibly be null, you check for null, even if you developed the other code and know it can't be null. Why? Because while you know it can't be null, the code, and the rest of the world, doesn't know that. So when you get hit by a bus and some other poor developer updates that GetCollection method to return null, he doesn't break anything that calls it, because everything that calls it takes on its own responsibility of making sure it has its ducks in a row.


No malloc is good malloc


Another small point to make here: Instantiating an empty collection just because you assume the other code isn't checking for nulls is simply wasting resources to accommodate bad practice elsewhere.


Summary


When writing a method to return something:

  • Return whatever makes sense, not what you worry someone needs. 
  • You shouldn't be concerned with what the consuming code is doing. (That's not separation of concerns, is it?)
  • Do not abuse system resources to be "nice" for other developers. 

When consuming a method that returns something:

  • The code you're writing is responsible for checking to ensure what is received is valid before operating on it. 
  • If it can't be null, check for null. 
  • If it has to be within a certain range, check to make sure it's in a certain range. 
  • You get the idea. Assume nothing about the returned value. 
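In code, the two halves of that contract look something like this (a JavaScript sketch with hypothetical names):

```javascript
// Producer: return whatever makes sense -- here, null when there's nothing.
function getOrders(customer) {
  return customer.orders && customer.orders.length ? customer.orders : null;
}

// Consumer: responsible for validating the returned value before using it.
function countOrders(customer) {
  const orders = getOrders(customer);
  return orders ? orders.length : 0;
}
```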

TL;DR: You should probably be returning null, and when consuming code, you should always confirm returned values before using them.


Wednesday, June 27, 2012

Asynchronous Reads Of Large Files And Fighting The Large Object Heap

I was doing some benchmarking of my event loop architecture recently, particularly in the realm of reading files, and I saw pretty much what I expected to see in most cases. For small files (< 80KB), synchronous file reads were a hair faster overall. This is to be expected: the I/O doesn't take very long, so just blocking and waiting for the response is going to be faster than offloading the work to the OS and waiting for a callback, with all of the overhead that entails. For files larger than that, async reads handled multiple files much faster. Again, this is to be expected. What I didn't expect, however, were OutOfMemoryExceptions while reading multiple large files asynchronously. I'd check PerfMon and see everything was just fine; memory-wise I had gigs to spare. So what was going on? I stepped through, and the errors seemed arbitrary in when they'd happen, but not in where: the error always occurred in the same bit of code, a simple allocation of a buffer.

What was happening was that I was allocating my buffer to be the exact same size as the incoming FileStream, which is fine if you only do it once in a while. But when you allocate an object larger than 85,000 bytes, that object goes immediately onto the Large Object Heap, and garbage collection leaves it there a lot longer than it normally would. This means that as I looped through my files to read, I was collecting dead space in memory, and once the limits of my allocated space were reached or my LOH got too fragmented... OutOfMemoryException.

The only good answer here is to use a Stream and read out smaller pieces individually when you're dealing with large files. It's also very important to make sure you're doing a good job caching frequently read file data... or any large objects, for that matter. In the end it made me question a lot of my I/O use and my buffer allocations. It was certainly a learning experience.

PS: I want to thank Chris Klein from Ares Sportswear for his recommendation to switch the EventLoop over to a Task and ContinueWith architecture. It sped things up a bit and it cleaned up the code a lot.

Monday, June 25, 2012

Event Loop Architecture in IIS with C# and ASP.Net

Big Changes


I made a few breaking changes to my event loop architecture... but it's alpha, and AFAIK, I'm the only one using it, so whatever. However these changes were to accommodate the new asynchronous HttpHandler I added to handle sending web requests into the event loop via IIS!

The downside to all of this is that, like everything with IIS, it requires a bit more setup than I'd like. To get started with using the new ALE HttpHandler, here's a few steps:

Getting Started With ALE in IIS

  1. Start a new Web Project. You can go ahead and gut the project leaving only the Web.Config and the Global.asax.
  2. Remove cruft from the Web.Config and register the AleHttpHandler.
    <?xml version="1.0"?>
    <configuration>
       <system.web>
          <compilation debug="true" targetFramework="4.0" />
       </system.web>
    
       <system.webServer>
           <validation validateIntegratedModeConfiguration="false"/>
           <modules runAllManagedModulesForAllRequests="true"/>
           <handlers>
              <add verb="*" path="*"
                  name="AleHttpHandler"
                  type="ALE.Web.AleHttpHandler"/>
           </handlers>
       </system.webServer>
    </configuration>
    
  3. Add initialization code to Application_Start in the Global.asax.
    void Application_Start(object sender, EventArgs e)
    {
        // Start the event loop.
        EventLoop.Start();
    
        // Get the ALE server instance and wire up your middleware.
        ALE.Web.Server.Create()
           .Use((req, res) => res.Write("Hello World"))
           .Use((req, res) => res.Write("<br/>No really."));
    }
    
  4. Add tear down code to Application_End in the Global.asax.
    void Application_End(object sender, EventArgs e)
    {
        // Shut down the event loop.
        EventLoop.Stop();
    }
    
  5. Have fun.
This has been such a fun project. I am really thankful for all of the feedback and support I've received from my friends and others both on and offline.

Sunday, June 24, 2012

Added Node.js Connect Style "Middleware" To ALE


Inspired by Connect

I've added middleware functionality to ALE. This was mostly to lay the groundwork for a routed server. The idea borrows heavily from the Node.js module Connect. Basically, this just allows the developer to register a series of methods that will be executed in order as the request is processed. In JavaScript, however, this requires the use of a cumbersome "next()" function object that gets passed to each piece of middleware as a means of calling the next. Since we have events and delegates in C#, that isn't necessary, and it's a little cleaner, IMO.
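For comparison, the Connect-style next() chaining described above looks roughly like this (a simplified JavaScript sketch, not Connect's actual code):

```javascript
// Each middleware receives (req, res, next) and must call next() to continue.
function runMiddleware(stack, req, res) {
  let index = 0;
  function next() {
    const middleware = stack[index++];
    if (middleware) middleware(req, res, next);
  }
  next();
}
```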

The implementation looks like so:

EventLoop.Start(() => 
{
   Server.Create()
      .Use(DoSomePrepWork)
      .Use((req, res) => {
         var foo = req.Context.ContextBag.Foo;
         res.Write("Foo: " + foo + "\n\n");
      })
      .Use(DoSomeLogging)
      .Listen("http://*:1337/");
});

Where there would be methods for middleware like so:

public void DoSomePrepWork(IRequest req, IResponse res)
{
   res.Context.ContextBag.Foo = "Wut, wut, wut? Socks and sandles!";
}

public void DoSomeLogging(IRequest req, IResponse res) 
{
   Logger.Log("A request was made to: " + req.Url);
}



What is middleware? How will it be used?


Well, in this case it's a poor use of a term that has sort of stuck when it comes to Node.js. Normally middleware would be considered some broker software, like a proxy, or a web API, or something like that. In this case it's just some code that is executed between some other calls.




Because of what it is, I think it should be obvious how this could be used: for logging, reporting, authentication, or additional processing of incoming requests of any sort. I realize at this point the server really only does one thing, and that there's no reason you couldn't just code your pre-processing and post-processing directly into the body of a single processing delegate... but I added this to lay the groundwork for a routing implementation that will be coming in the near future.

Thursday, June 21, 2012

ALE - Added A Non-Blocking Web-Sockets Implementation

Continuing Work

Tonight after the kids went to bed I put together a non-blocking Web Sockets implementation. After a lot of trial and error I've finally got it sending and receiving over Web Sockets, at least with Chrome. The changes have been pushed to github. As of this post that would be version v0.0.3.0 ... still very alpha. I'm trying to get the method signatures to be something that flows well for what this architecture does. It's sort of a hard thing to do, IMO. Here's what I've come up with for Web Sockets in ALE, it's loosely based on what Node.js does:

EventLoop.Start(() => {
   Net.CreateServer((socket) => {
      socket.Send("Wee! sockets!");
      socket.Recieve((text) => {
          //do something here.
      });
   }).Listen("127.0.0.1",1337,"http://origin/");
});


I think I would like to get the Listen method's signature down to just two parameters that are both string representations of URIs, rather than 3 separate arguments. I'm also not sure I'm good with how I've implemented the Receive method, which is a little abnormal in the C# world. What it's doing is actually "binding" to an "event", which is really just putting an Action in a List<Action> to be queued to the EventLoop when something is received.

I probably should go back in and clean up the code quite a bit, add comments etc. I've been sprinting so fast and hard whipping this architecture up that I haven't been very diligent with code cleanup or Unit Tests. Mostly just functional testing at this point.

Well... bed time for now...

Attempting To Peer Up The Skirt of Non-Blocking I/O in Windows

First A Thank You

Thanks to some fantastic advice I received from Wim Coenen and svick in my other blog post about my little event loop architecture, I've gone back and made some changes that take advantage of .NET's built in Async I/O, rather than my silly BeginInvoked actions.

Great advice like this is why I started this blog. So I can learn.

A Little Digging

Anyhow, after their comments, I started thinking, "How in the world can any I/O be done asynchronously and not block at least one thread?" I mean, a non-blocking main worker thread, sure, but non-blocking I/O? Is there some sort of magic I don't know about whereby the OS will just "know" where to start my code back up again? Well, after some digging I came across I/O Completion Ports, which apparently are such magic. Well... mostly.

My (limited) Understanding of I/O Completion Ports and .NET Async I/O

Bearing in mind I just learned about all of this recently, here's my dumbed-down version of what's going on: Microsoft's asynchronous I/O calls in .NET (e.g. FileStream.BeginRead or Socket.BeginReceive) leverage I/O Completion Ports. An I/O Completion Port is created with a file handle (which is an endpoint handle for anything I/O, e.g. sockets, named pipes, file access, etc.) and additional information (seemingly in the form of a pointer to something like an object or a method) about what to call upon completion. I followed the calls around in dotPeek starting at FileStream.BeginRead, which only gives me a very fuzzy idea that this is what is happening, when combined with the specs of I/O Completion Ports mentioned above. This is because most of the real magic happens inside native calls I can't see. Frankly, even if I could see them, I doubt I'd have the smarts to figure out what they're doing quickly, or the patience to try to unravel the mystery.

ALE Updated to 0.0.2.1

As I stated above, I completely overhauled my implementation of non-blocking I/O to leverage .NET's asynchronous I/O methods. Everything should still be up on github. I've also made some changes to how the EventLoop is used. Now, starting the EventLoop and beginning to work with it looks something like this:

EventLoop.Start(() => {
   Server.Create((req, res) => {
      res.Write("<h1>Hello World!</h1>");
   }).Listen("http://*:80/");
});

I also removed a lot of other asynchronous helpers that would block. I think my future implementations of Async calls that don't leverage .NET native async stuff will simply Pend more events on the event loop. I'll probably implement that soon.

Thanks again to the people who gave me feedback via email, facebook, in person, etc!

Tuesday, June 19, 2012

An Event Loop Architecture in C# similar to Node.js

I Really Like Node.js


As some of you know, I've been working with Node.js quite a bit recently. There are a lot of things I like about it, and a few things I don't. I'm not going to get into all of that now. I think it's a great tool, and I'll continue to use it. But thinking about its shortcomings had me wondering if an architecture like that was possible in my "home language" of C#.

So while I was showering, or pooping, or some bathroom related event (all of my best thoughts are usually in a bathroom I think), I came up with an idea of how I could implement an Event Loop in C#. The idea is simple enough, start a thread (or maybe more than one if I want) that pulls Actions off of a FIFO Queue and executes them. The second step would then be to implement I/O in a non-blocking way.
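Reduced to a sketch, that idea looks like this (JavaScript here for brevity, with hypothetical names; ALE itself is C# with threads):

```javascript
// A minimal FIFO event loop: pended actions are pulled off the front of the
// queue and executed in order by a single worker.
class EventLoop {
  constructor() { this.queue = []; }
  pend(action) { this.queue.push(action); }
  run() {
    while (this.queue.length) {
      const action = this.queue.shift();
      action();
    }
  }
}
```

Note that actions pended while the loop is draining simply land at the back of the queue and run in turn.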

Why An Event Loop?


Well, simply put, it's non-blocking, which means threads are used to their fullest potential. With other, synchronous architectures, threads chug along doing their thing until they need to wait for I/O (reading something from a disk, waiting for something over the network, waiting for user input, etc.). At that point they block, which means they're sitting there doing nothing. With an Event Loop architecture, the thread is plugging away at bits of code (events) that have queued up. When one of those nasty I/O steps needs to be done, the architecture sends that off to some other thread in a thread pool, then continues processing what's next in the queue. The end result is that the main processing thread does not block and is used to its fullest potential.

I realize there are much, much, better explanations of the Event Loop architecture out there, and I've probably butchered the explanation to some degree, but I think that gets the general idea across.

Introducing ALE


So I created a new project, which I've open sourced on github called ALE (Another Looping Event)... I'm not sure it's the best name, and I'm up for suggestions, but ALE... beer... beer is good. Seemed good enough. Especially for what started off as a simple toy project or proof of concept.

This project thus far represents a brainstorm. A lot of fun effort, but it's an experiment at this stage, very alpha. What I'm really hoping for is your feedback. If I'm doing something wrong, tell me. If I could do something better, tell me. If you'd like to contribute, let me know. This has been a fun project to play with and I'm pretty excited about it.


Here is a basic example of how you can implement a simple web server:

UPDATE: I've added an asynchronous http handler so IIS can handle the incoming requests and send them into the event loop. More about that here.

using ALE.Http;

//Start the EventLoop and start the web server.
EventLoop.Start(() => {
   Server.Create((req, res) => {
      res.Write("Hello World");
   }).Listen("http://*:1337/");
});


Here's an example of some file system interaction:

using ALE.FileSystem;

//Start the EventLoop so it can process events.
EventLoop.Start(() => {
   //Read all of the text from a file, asynchronously.
   File.ReadAllText(@"C:\Foo.txt", (text) => {
      Console.WriteLine(text);
   });
});


An example of a simple web socket server.

using ALE.Tcp;

EventLoop.Start(() => {
   //Create a new web socket server.
   Net.CreateServer((socket) => {
      //send data to the client.
      socket.Send("Wee! sockets!");
      //set up callbacks for receiving data.
      socket.Recieve((text) => {
          //do something here.
      });
   }).Listen("127.0.0.1",1337,"http://origin/");
});

I've gone through the trouble of adding a few other things, a basic SqlClient implementation, for example. In the near future I'm planning on implementing some Entity Framework integration, Web Server middleware (ala Node.js's Express modules), web request routing, and Web Sockets.

But first things first: I'm pleading with my much smarter developer friends to tell me what I'm doing wrong. I realize it's light on documentation (there is really none, haha)... but have a look and play around. It's pretty simplistic.

---
EDIT: Comments and recommendations below are dead on and I did a little research and wrote a bit about my findings here.

---
EDIT 2: Updated the syntax to reflect current project state.

---
EDIT 3: Updated the examples to include a web socket server.

Thursday, June 14, 2012

Don't Like My Language? Well You're A Dumbass

Okay, I kid, I kid. Sort of...

Experts Become Polyglots

This last weekend I attended the Pittsburgh Tech Fest. I met a lot of good people, and I heard a lot of really good talks. It started off with a keynote speaker named "Doc" Norton, whose name inspired shuddering fears of an old anti-virus tool I used to use that would eat up all of my CPU at random... but I digress: he had a very good talk to start off the day. He spoke about what it means to be an expert programmer and talked briefly about becoming a "polyglot" (a speaker of many languages) of programming. He talked about how, when someone becomes an expert in one language, it's often a good idea for them to move on and learn other languages, because when they do, they learn new paradigms, new methods and new ideas for solving problems. In the end, being well versed in more than one language actually makes them better at all of the languages they use.

This really spoke to me, because while I definitely don't believe I'm an "expert" in any one language, since reaching a certain point with C#, I've been trying to dive more into other languages and get a deeper understanding of them. The first language I decided to do that with has been JavaScript. This is mostly because I know it's something I'll use whenever I'm doing web development.

JavaScript/Node.js Haters

Since starting to really try to master JavaScript, I've been finding myself running into a lot of other people's opinions on JavaScript, particularly when it comes to Node.js. It seems that, especially in circles outside of Microsoft developers, there is a very low opinion of JavaScript and Node.js. At least two of the speakers at the Pittsburgh Tech Fest, speaking on topics that had nothing to do with Node.js, made it a point to mention they didn't like Node.js. When pressed on their reasoning, they offered up mostly uninformed opinion, but one thing they both said was "because it's JavaScript". When asked why that was a hindrance, they'd say something to the effect of "the scoping is terrible" or "there's no strong typing".

Languages Are Like Onions... 

Sorry, I just watched Shrek with my daughter the other day. The point I want to make about "other languages": programming languages, all of them, have a specification. That specification may include things like dynamic typing (no strong typing), or a different scoping of variables than the language you're used to. This doesn't make that language "incorrect" in its own implementation, it just makes it different from your favorite language. Each of these features is generally implemented by some very smart people for very specific reasons. The choice to use loose typing, for example, is a choice saying "hey, I trust the developer using this knows what he's doing and doesn't need his hand held in the form of compiler errors while dealing with multiple data types". The choice to scope variables to functions, rather than to any set of curly brackets, is just that: a choice.

Your Generalizations Are Bad... Allow Me To Generalize

A general dislike for an entire programming language, in this author's opinion, just amounts to a lack of understanding of that language. All too often I think it probably amounts to a little insecurity as well. Or fear of the unknown. It's hard to say. Some languages are verbose to the point of being a little annoying (CGI or VB comes to mind). Some languages don't have very good frameworks associated with them (ASP Classic anyone?). Some languages are downright archaic (COBOL, RPG make my eyes bleed). Some languages are stuck in one OS environment (C#, VB.Net, and I'm not counting Mono for now). Other languages just lack features and want you to throw interfaces on everything (I'm looking at you, Java). Some languages feel like they've been cobbled together haphazardly from remnants of PERL and a thousand open source projects (*cough*PHP*cough*). Offended? Oh no? I didn't touch on your favorite language? Damn. I'll try harder next time. I was just trying to illustrate, a complaint can be made about any language.

Can't We All Just Get Along?

... the point is, they're all good languages. All of them. And I'm happy to learn more about them and try them out, and you should be too. They all have strong points and weak points. What language should you choose for your next project? Whatever you want.

Sunday, June 10, 2012

Helpful JQuery Selectors: Select By Regular Expression, >, < and More

I've created a small library of custom JQuery selectors I thought I would share.

The code below adds the following selectors to JQuery:
  • :btwn(x,y) - selects elements from a collection of elements between two indices x and y. So if $('div') were to return ten elements, $('div:btwn(1,4)') would get the three divs between indexes 1 and 4. Another way to do this with the default JQuery selectors is $('div:gt(0):lt(3)'), which gets all indexes after 0, then gets all indexes from that set before 3. It's just a different way to approach the problem.
  • :regex(test) -  selects elements whose text contents match the regular expression in test. This value can be surrounded by quotes if need be.
  • :startsWith(str) - selects elements whose text contents start with str. This value can be surrounded by quotes if need be.
  • :endsWith(str) - selects elements whose text contents end with str. This value can be surrounded by quotes if need be.
  • :attrgt(name, val) - selects elements by testing the attribute named name to see if it is greater than the value supplied as val.
  • :attrlt(name, val) - selects elements by testing the attribute named name to see if it is less than the value supplied as val.
  • :attrgte(name, val) - selects elements by testing the attribute named name to see if it is greater than or equal to the value supplied as val.
  • :attrlte(name, val) - selects elements by testing the attribute named name to see if it is less than or equal to the value supplied as val.
  • :attrregex(name, test) - selects elements by testing the attribute named name with the regular expression supplied to test. The regular expression may be surrounded with quotes.
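To make the semantics of :btwn concrete, here's a standalone sketch of its index test in plain JavaScript (no jQuery; the function name btwn and the array of stand-in "divs" are just illustrative — the real selector operates on a jQuery collection):

```javascript
// Keep the elements whose index i satisfies x <= i < y, mirroring the
// :btwn(x,y) behavior described above (btwn(1,4) keeps indexes 1, 2, 3).
function btwn(elements, x, y) {
  return elements.filter(function (el, i) {
    return i >= x && i < y;
  });
}

// Ten stand-in "divs", like the $('div') example above.
var divs = ['d0', 'd1', 'd2', 'd3', 'd4', 'd5', 'd6', 'd7', 'd8', 'd9'];
var picked = btwn(divs, 1, 4);
```

Run against ten elements, btwn(divs, 1, 4) yields the three items at indexes 1 through 3, the same set as the $('div:gt(0):lt(3)') equivalent.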

Download the minified source or view the development files on github.

Wednesday, May 23, 2012

ASP.Net MVC App Structure: Using Cassette For JS and CSS

I can't tell you how many times I remember having to put my finger in one of the holes and wind the tape back in.
I still love these things. I'm not sure why.

What Is Cassette?

Cassette is a bundling and minification tool that is very powerful and extremely easy to implement. Long before MVC4 Beta had bundling and minification, Cassette was already doing it better, IMO.




Advantages Of Using Cassette

  • Easy to debug! Bundling and minification is only performed when debug="false" is set in the Web.config. 
  • No more need for @section. There will be no more reason to render a section to your layout to plop your <script/> and <link/> tags in.
  • Tracks JavaScript file dependencies. Cassette uses a nice little comment at the top of each .js file to manage what scripts need to be loaded first.
  • Automatically prevents false caching. We've all seen the issue where we've updated some script file and our browser is just dying to hang on to some old version. Cassette gets around this by appending a hash of the file(s) to the reference URL, so every time you change a file, you're pretty much guaranteed your browser will pull down a new copy, but at the same time unchanged files will remain cached.
  • Handles JavaScript templates. Cassette works well with a variety of JavaScript template providers. It actually creates a script to precompile your templates, renders a script tag that loads that script, and hosts the script. It's very slick, but I'm going to try to focus on just the JS and CSS aspects of Cassette today.

Getting Started With Cassette

First you need to install Cassette. There is a nice quick start guide on their site. Personally, I just right click References in my web project and select Manage NuGet Packages. Then I search online for Cassette and install it. Once it's installed, it will have set up your Web.configs, added references, and added a class file called CassetteConfiguration.cs. 

Once the NuGet package is installed, you're going to want to take a look at CassetteConfiguration.cs. In here there are a few method calls that set up the root folders for your scripts, templates, and styles. JavaScript templates are disabled by default. As of this writing, it comes initially set up to minify, but not bundle, JavaScript and CSS when debug="false". You can do a lot from here: configure bundles via file searches, set up CoffeeScript, LESS, and so on; for that, see their documentation. Generally I just do the following:

C# - from CassetteConfiguration.cs

public class CassetteConfiguration : ICassetteConfiguration
{
    public void Configure(BundleCollection bundles, CassetteSettings settings)
    {
        bundles.Add<StylesheetBundle>("Content");
        bundles.Add<ScriptBundle>("Scripts");
    }
}


Your Folder Structure Is Now Free!

Now that Cassette is going to be bundling everything in production, I like to structure my CSS and JavaScript folders to mirror my Views directories. So if I have a view at /Views/Foo/Bar.cshtml that requires a script, I'm going to have a JavaScript file at /Scripts/Foo/Bar.js, and maybe a style sheet at /Content/Css/Foo/Bar.css. The world is your oyster now, because once Cassette is done with this, it won't really matter where you have your files or how many of them there are. Also, for support methods in JavaScript I tend to break them out by their namespace names (for more information on namespacing in JavaScript have a look here).

Where To Place .JS and .CSS References In Markup

I always put my CSS files in the <head/> tag, and my JS file references just before the closing </body> tag. I do this because I want the CSS to be loaded and processed right away so the styles can be applied to the objects in the DOM as soon as possible. I also want my JavaScripts to be one of the last things loaded because most browsers will only pull down two or three external resources at a time, and I don't want all of the images on the page to wait for my JavaScripts to get pulled down, when 99% of the time my JavaScripts aren't even going to be executed until the entire document is ready anyhow.

Razor - _Layout.cshtml


@{
    Bundles.Reference("~/Scripts/jquery.js");
    Bundles.Reference("~/Scripts/jquery-ui.js");
    Bundles.Reference("~/Scripts/foo.js");

    Bundles.Reference("~/Content/Site.css");
    Bundles.Reference("~/Content/themes/jquery-ui.css");
}
<!DOCTYPE html>
<html>
<head>
   <title>@ViewBag.Title</title>
   @Bundles.RenderStylesheets()
</head>
<body>
   @RenderBody()
   @Bundles.RenderScripts()
</body>
</html>


Where Not To Place JavaScript and Styles In Markup

  • Do not use inline scripts, and do not use inline styles. I'm not going to get into the "whys" here, because that's a whole different post... For now, I'll just leave it at "you want to separate your styles from your behaviors from your markup". Be unobtrusive.
  • Do not put scripts or styles in your Partial Views! This is a really common mistake I see. What's going to end up happening is you're going to load the same scripts or CSS twice, which can cause unexpected behavior.
  • Try not to put JavaScript in the <head/> tag if you can avoid it. It's not always avoidable, but for reasons I previously mentioned, put them just before the closing </body> tag.

Setting Up JavaScript File Dependencies For Cassette

This is pretty simple as long as you stay on top of it. What needs to be done here is you need to add a little comment to the top of your JS file referencing any .js files that are prerequisite to that file. For example:

JS - Cassette Reference Example

/// <reference path="Scripts/jquery.js" />

/* be sure to note that above there are three "///"
 * and also the path="" is root relative even without
 * the leading "/". */

$(function(){ 
  alert('Omg, wee! The document is ready!!!11');
});


Referencing Your .js and .css Files With Cassette

This is probably the easiest part of all. In a C# code block in your .cshtml files (I recommend at the top of the file, just as I demonstrated in my _Layout.cshtml example above), add a line that says Bundles.Reference("~/Path/Your/File.js") for each .js or .css file you want to add into a bundle. The syntax is the same for both .js and .css files. It gets a little trickier if you want to reference an external file; although I'm not sure why you'd want to do that, they do detail referencing external scripts in their documentation.

That's really all there is to it! Now I'd recommend setting debug="true" and "false" in your Web.config and checking the page out in your browser a few times so you can marvel at the differences.

Tuesday, May 22, 2012

Structuring an ASP.Net MVC App

Your way is WRONG!

I'm kidding, I'm kidding... haha. Everyone has their own way to do this, and I'm no exception I guess. After 5-6 MVC projects, I feel like I've come to a formula that really works for me and I wanted to share. I've decided to do a series on my personal rendition of structuring an ASP.Net MVC application. I'm going to focus on the current state of things, which is MVC 3 for now. I'll do MVC 4 again maybe when it's out of beta. It has a lot of nice features that may or may not replace what I'm going to talk about here, so this content may be a bit dated in a few months. Such is tech.

Project Structure

The first thing I like to set up is a project structure with the following projects. For the sake of this series, I'm going to use a parent namespace "NS". The projects I add right off are as follows:

  • NS.sln - the solution file.
    • NS.dll - A class library to house global code and helpers, I always have these, so I feel okay adding it right off.
    • NS.Web.csproj - Our MVC app itself.
    • NS.EF.dll - A class library for our actual data access, for the sake of this series, I'll be using EntityFramework, but it could be anything, really.
    • NS.Data.dll - A class library that will house our model and our data access methods that build them. More on this later.

Project Dependencies

Here's where, IMO, a lot of developers drop the ball. I really don't want my web project to be dependent on Entity Framework, or SQL, or MySQL, or MongoDB, or anything like that. I want a level of abstraction, especially when that sort of abstraction comes so easily. To do this, I set up my project dependencies accordingly. Below is a diagram of how my projects reference each other.



Wiring Everything Up

The first thing I do is go into NS.EF and generate my entities using EntityFramework. If you're using something else, I think it's a good idea to flesh out your actual database access or ORM or whatever right now. It's clearly a critical piece.

Once that is done it's time to move forward to the UI and work out a Controller Action. I have a requirement to create a page that writes out an unordered list of "Foos" from my data. So I'm going to create a FooController.cs file in my Controllers directory, then I'm going to create an Action on it called "List". I'm also going to create a view file in /Views/Foo called List.cshtml.

Beyond that we need to set up our data access for the controller, and that means hopping back to our NS.Data project and adding an IFooService interface, then implementing it. The important thing here is we don't want any consuming code to be able to create an instance of the IFooService implementation directly, so for our implementation, which I'll call FooService, I'm going to leave the modifier off of it, making it internal, whereas IFooService will obviously be public. So how does the web project create an instance of IFooService if it can't see the implementation? With a little static class I'll call ServiceFactory. ServiceFactory will have a static method on it (also called FooService) that creates a FooService (the implementation) and returns it as an IFooService.

Now we head back to our controller for a little dependency injection: I'm going to add a protected readonly field of type IFooService and call it FooService. Then I'm going to add two constructors: one that accepts my dependency, an IFooService, and another that is parameterless and creates an IFooService via the ServiceFactory.FooService() method, passing it along to the first constructor.

Now I'm going to set up a Repository pattern for our actual data access via EF. This is done so we can use dependency injection to test our building of the models in FooService. To do this, I'm going to add an IFooRepository, a FooRepository (the implementation), and a RepositoryFactory to the NS.EF project in the same manner I created and implemented the services and ServiceFactory. This repository will do all of the accessing of EntityFramework. This way, EF is abstracted out enough that we can test without having to mock anything more than a repository implementation.

I want to state this here: I don't like code generated repository patterns. Sure they might save you a little time up front, but they hardly ever do everything you need them to do, they generate a lot of code you'll never use, and IMO, they don't add any value, if anything they just add one more level of complexity to what is really a simple problem: Getting and setting data in some datasource. The underlying ORM is more than enough, I think.

Once that is done, I'm able to go into my Action method and call my GetFooList() method, which returns a model (housed in NS.Data as well), then return an ActionResult with the proper view and the model we just got.

Below you'll see some pseudo-code to help you get the basic idea.

C# - FooController.cs

public class FooController : Controller
{
   /*
    * our service from NS.Data
    * */
   protected readonly IFooService FooService;

   /* 
    * A little ctor magic for dependency injection. 
    * ServiceFactory is from the NS.Data namespace and 
    * provides static methods for instantiating a new service.
    * */
   public FooController() : this(ServiceFactory.FooService())
   {
   }
   public FooController(IFooService fooService) 
   {
      FooService = fooService;
   }

   /*
    * Here we have our Action method.
    * */
   public ActionResult List() 
   {
      var model = FooService.GetFooList();
      return View("List", model);
   }
}

C# - IFooService.cs (NS.Data)

public interface IFooService
{
   FooListModel GetFooList();
}

C# - FooService.cs (NS.Data)

/* Our actual implementation is going to be left internal
 * (NOT PRIVATE) classes with no specifier are internal.
 * Because we don't want outside code creating instances of 
 * our implementations directly. */
class FooService : IFooService
{
   // our repository dependency.
   protected readonly IFooRepository FooRepository;

   // hot dependency injection.
   public FooService() : this(RepositoryFactory.Foo())
   {
   }
   public FooService(IFooRepository fooRepository)
   {
       FooRepository = fooRepository;
   }
   // do the work of getting the data and building the model.
   public FooListModel GetFooList()
   {
      var model = new FooListModel();
      // get our entities from the Repository.
      var fooEntities = FooRepository.GetFoos();
      /* ... build the model out here from the entities ... */
      return model;
   }
}

C# - ServiceFactory.cs (NS.Data)

public static class ServiceFactory
{   
   /* This is where we're going to get our instance of IFooService
    * from for our Controllers. */
   public static IFooService FooService()
   {
      return new FooService();
   }
   /* this one allows dependency injection */
   public static IFooService FooService(IFooRepository fooRepos)
   {
      return new FooService(fooRepos);
   }
}

C# - IFooRepository.cs (NS.EF)

public interface IFooRepository
{
   FooEntity[] GetFoos();
}

C# - FooRepository.cs (NS.EF)

class FooRepository: IFooRepository
{
   /* In here we're going to do the work of getting 
    * some entities from Entity Framework. I like to
    * return arrays or lists or something that forces me
    * to enumerate my queryable to make sure I don't abuse
    * EntityFramework. */
   public FooEntity[] GetFoos()
   {
        using(var dbcontext = new WhateverEFContext())
        {
            /* do whatever you need to do to get your IQueryable */
            return someQueryable.ToArray(); //make sure you only enumerate ONCE!
        }
   }
}


C# - RepositoryFactory.cs (NS.EF)

public static class RepositoryFactory 
{
   public static IFooRepository Foo()
   {
      return new FooRepository();
   }
}

So this is probably all about as clear as mud now, huh? There is just a LOT to go over. I'd recommend googling some articles on why we should use dependency injection, or why it's a good idea to separate your database access code from your Controller Actions. I'd rather not get into that myself, as it's a lengthy discussion and frankly, most of my friends that read this blog are already all over those concepts, in most cases more so than I am.

When I'm all finished with this series, perhaps I'll put together a sample project and host it somewhere for people to pull down and use. There are still several things that I do in my MVC apps I want to touch on, such as Script and Style location and placement, Bundling and Minification with Cassette, other nice to haves like Logging and Profiling, etc. Probably nothing new to everyone, but I'm hoping someone that reads this finds it helpful at some point.

EDIT: I've changed a few things about the way I've named my namespacing and structured things. I had forgotten to implement a repository pattern on the data access side of things, and that caused the Service implementations to be a bit hard to test. I'm going to try to edit this post to rectify that.

Tuesday, May 15, 2012

Advanced Asynchronous JavaScript Function Call With Event Subscriptions

Per a request from my good friend, Adam Hew, who enjoyed my last post, I've thrown together a quick implementation of an asynchronous function caller with JQuery ajax-esque event subscriptions. 1.7+ of course. ;). The challenge here was to put together something where multiple subscribers can be notified when the code is complete, when it errors, or when it succeeds. So here goes:

My Implementation Of An Async Caller With Event Subscription

var async = (function() {
    var AsyncResult = function(caller) {
        // set up some events.
        var completeEvents = [];
        var successEvents = [];
        var errorEvents = [];
        // and some event triggers.
        var onComplete = function() {
            for (var i = 0; i < completeEvents.length; i++) {
                completeEvents[i]();
            }
        }
        var onSuccess = function(result) {
            for (var i = 0; i < successEvents.length; i++) {
                successEvents[i](result);
            }
        }
        var onError = function(err) {
            for (var i = 0; i < errorEvents.length; i++) {
                errorEvents[i](err);
            }
        }
        //a little prep work.
        var self = this;
        this.caller = caller;
        //set up our subscription methods.
        this.complete = function(fn) {
            completeEvents.push(fn);
            return self;
        };
        this.success = function(fn) {
            successEvents.push(fn);
            return self;
        }
        this.error = function(fn) {
            errorEvents.push(fn);
            return self;
        }
        //call the function async and kick off triggers.
        setTimeout(function() {
            try {
                onSuccess(caller());
            } catch (err) {
                onError(err);
            }
            onComplete();
        }, 0);
    };
    return function(fn) {
        return new AsyncResult(fn);
    };
})();

//Now try it out!
async(function() {
    return 2 + 2;
}).complete(function() {
    console.log('complete!');
}).success(function(result) {
    console.log('result is ', result);
}).error(function(err) {
    console.log('ERROR!', err);
});


//or another way (showing multiple subscriptions):
var r = async(function() {
    return 'foo';
});
r.success(function(data) {
    console.log(data);
});
r.success(function(data) {
    alert(data); //alert it too? sure.
});

So what's going on here? First things first, I needed to figure out what to do. To follow JQuery's pattern, I had to have an object to return from my method that had subscription methods built into it. And those subscription methods need to return a reference to their owner... so I needed to create the AsyncResult class. But I didn't want my consumers creating instances of that class willy-nilly, so I needed to wrap it in a closure to hide it from all but the code that I want creating new instances of it. So I wrapped the class entirely in a one-time called function that returns a reference to another function that does the work of creating the class. Inside the class, I used a very simple event pattern. Basically, I added an array for each event to hold handlers for that event. Then I added some triggers (onComplete, for example) that loop through the handlers in those arrays. Next, I added some subscription methods that essentially just push handlers onto those arrays. At that point I was pretty much done, except for adding the setTimeout trickery to kick off the passed function in an asynchronous fashion. Finally, I added the "caller" property to AsyncResult as a nice-to-have, so a consumer could get a reference to the function that was passed to the async method.
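Stripped of the async part, the subscribe/trigger pattern described above boils down to something like this (makeEvent is just an illustrative name, not part of the code above):

```javascript
// One array of handlers per event; subscribe pushes onto the array,
// and trigger loops through it, calling each handler in order.
function makeEvent() {
  var handlers = [];
  return {
    subscribe: function (fn) { handlers.push(fn); },
    trigger: function (arg) {
      for (var i = 0; i < handlers.length; i++) {
        handlers[i](arg);
      }
    }
  };
}

// Multiple subscriptions, just like the second example above.
var success = makeEvent();
var calls = [];
success.subscribe(function (result) { calls.push('first: ' + result); });
success.subscribe(function (result) { calls.push('second: ' + result); });
success.trigger(4);
```

Each of the three events in AsyncResult (complete, success, error) is just one of these handler arrays with its own trigger.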

I hope that's all clear enough. I feel like I muddled my way through that explanation a little.

EDIT: I failed to mention one important thing, setTimeout will wait for any current block of code to complete before it will actually fire! So, if you have a setTimeout in a function with some huge loop that takes forever, that setTimeout, even if set to an interval of zero, will not fire until the executing code finishes, loop and all.
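A quick sketch demonstrates the point: even with a delay of zero, the callback can't fire until the currently executing block has finished.

```javascript
var order = [];

// Schedule a callback with a zero delay...
setTimeout(function () {
  order.push('timeout');
}, 0);

// ...then do some synchronous "work". No matter how long this takes,
// the timeout callback won't fire until this whole block returns.
for (var i = 0; i < 1e6; i++) { /* busy loop standing in for slow code */ }
order.push('sync done');
```

'sync done' always lands in the array before 'timeout', because setTimeout only queues the callback to run after the current execution finishes.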