Thursday, July 19, 2012

String.Empty vs "" - How .NET Handles String Instances

For the longest time, I've known people, myself included, who used String.Empty to represent "" in code, because back in the 1.0 - 1.1 days, for every String literal you created, you were creating an object in memory. String.Empty was a static reference to a single object, so using it prevented the developer from creating all sorts of empty strings in memory. This is how it was for many years, until something changed, and frankly, I didn't get the memo.

String.Empty and "" are now literally the same thing. In fact, any two string literals that match are now the same reference as well. This isn't true for other primitive types, like integers for example, but for strings it is. Have a look:

private static void TestStringInternment()
{
   TestEquality("\"\" and \"\"", "", "");
   TestEquality("\"\" and String.Empty", "", String.Empty);
   var x = "foo";
   var y = "foo";
   TestEquality("x and y", x, y);
   x += "!!!";
   TestEquality("x and y again", x, y);
   TestEquality("0 and 0", 0, 0);
}

static void TestEquality<T1, T2>(string name, T1 a, T2 b) where T1 : IComparable where T2 : IComparable
{
   Console.WriteLine("Equal: {0}\tSameReference: {1}\t// {2}", a.Equals(b), Object.ReferenceEquals(a, b), name);
}


Equal: True     SameReference: True     // "" and ""
Equal: True     SameReference: True     // "" and String.Empty
Equal: True     SameReference: True     // x and y
Equal: False    SameReference: False    // x and y again
Equal: True     SameReference: False    // 0 and 0

So as you can see, as long as the values of the strings are the same, they're actually the same instance. But why is this happening? And how? Well, the why is pretty simple: strings can be any size in memory, so it's a good idea to manage their memory usage as closely as you can. The how is also pretty simple: since strings are immutable, it's safe to point all variables with matching strings at the same reference, because you know that reference won't change. The CLR interns every string literal, so each matching literal points to the same instance in the intern pool.

So why isn't this done with things like int? Int is immutable too! ... I presume it's because an int is only 4 bytes long and has a very small memory footprint, whereas a string can be any number of bytes long, and is almost always longer than an int. (Note that in the test above, each 0 had to be boxed into its own object just to compare references, which is why ReferenceEquals returned False.)

One thing to note, however, is that some code will indeed create a new string instance, StringBuilder for example. Even if it produces the same value as another string, unless you call String.Intern() on the output, it won't use the value stored in the intern pool. This doesn't mean you need to intern every string you get from StringBuilder; it just means you should be aware that not all strings are referenced from the intern pool.

So, I'll admit it, I didn't know this fun fact for WAY too long. I was aware of the intern pool, but I thought interning was something that needed to be done explicitly. Now I know better, and I figured I would share with my friends, who, if they did know, never corrected me. :P Thanks, jerkfaces. LOL

Friday, July 13, 2012

Life Without JQuery

It might be a shock to some, but before the Resig Singularity occurred on August 26, 2006, many developers actually used plain old JavaScript in their web applications, and had been doing so for a long time.

JQuery is a very powerful tool, no doubt. It puts a nice, shiny API on the clunky, old DOM. Also, its ubiquitous nature means that most of the time it's already cached on the user's machine if you're referencing the file from a CDN such as Google's. When you're doing a large amount of DOM manipulation and AJAX calls, especially in a client-heavy app, JQuery is a must. This is an incredibly common scenario, so common that the boilerplate for almost any web application platform comes with a reference to JQuery by default. For some reason I always found this a little disturbing.

Is it true that JQuery always makes sense? The answer is plainly no. I realize this blog entry is beating a dead horse, but I think it's worth bringing up again so the idea doesn't get stale. You don't always need JQuery.

But when does using JQuery just not make sense?

  • When you're only doing simple actions on the DOM: If you're only doing something like adding a few numbers together and updating a text field or other element, you probably don't need JQuery.
  • If you're only making a few AJAX calls in an otherwise small application: XMLHttpRequest is your friend. It's not such a bad thing to understand how it works.
  • If you're only using it to bind events to elements: While I admire your desire to keep things unobtrusive, addEventListener and attachEvent work just fine. You'll probably want to make a little helper function or something, but trust me, it can be done without JQuery.
  • When you're using another framework, and you're barely using JQuery: Okay, I'll have to explain this one a little bit. If you're using a framework like AngularJS or Backbone, it has its own AJAX handling. If you're no longer using JQuery's AJAX handlers, it might be worth checking how you're using JQuery, and how often it's used. You may just be adding overhead to your application for little benefit.
  • If your application is just small: You don't need JQuery to write document.write('Hello World'); It's a little silly to create an application where the JavaScript libraries it's using are larger than the actual application would be without them.
Replacement parts for JQuery:

Selecting elements from the DOM with a CSS-style selector: you can use document.querySelectorAll. It's supported in IE8 and above, as well as all the good browsers. It allows you to use most selectors, but doesn't support all of the fanciness that JQuery does.

var nodes = document.querySelectorAll('#idname .classname');
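One gotcha when coming from JQuery: querySelectorAll returns a NodeList, which is array-like but doesn't give you JQuery's .each(), so you loop over it yourself. A minimal sketch (the helper name addClassAll is my own; it works on any array-like collection of elements):

```javascript
// Append a class name to every element in an array-like collection.
// Works on a NodeList from querySelectorAll or on a plain array.
function addClassAll(nodes, name) {
   for (var i = 0; i < nodes.length; i++) {
      nodes[i].className += ' ' + name;
   }
}

// e.g. addClassAll(document.querySelectorAll('#idname .classname'), 'highlighted');
```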

For binding events, you would use addEventListener (and, for IE8 and earlier, attachEvent).

var btn = document.getElementById('myButton');
var buttonClicked = function(e) {
   // handle the click here
};
if (btn.addEventListener) {
   btn.addEventListener('click', buttonClicked);
} else if (btn.attachEvent) {
   btn.attachEvent('onclick', buttonClicked);
}
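Since you'd otherwise repeat that feature detection for every element, it's worth rolling it into the little helper function mentioned above. A minimal sketch (the name addEvent is my own invention):

```javascript
// Cross-browser event binding: prefer addEventListener,
// fall back to attachEvent for old IE.
function addEvent(el, type, fn) {
   if (el.addEventListener) {
      el.addEventListener(type, fn, false);
   } else if (el.attachEvent) {
      el.attachEvent('on' + type, fn);
   }
}

// e.g. addEvent(document.getElementById('myButton'), 'click', buttonClicked);
```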

For AJAX calls, there is XMLHttpRequest.

var xhr = new XMLHttpRequest();
xhr.onreadystatechange = function() {
   if (xhr.readyState == 4) { // complete
      if (xhr.status == 200) { // OK
         // do something with xhr.responseText
      }
   }
};'GET', '', true);
xhr.send();

In the end, it's not always necessary to use JQuery. That said, I highly recommend using it in most cases. It's just important to note that JQuery is really just a set of really nice wrappers around things you can do yourself, and in some cases you don't need everything that JQuery brings with it.

Thursday, July 5, 2012

Should I Return A Collection Or A Null? It Doesn't Matter.

Modularity, People!

While perusing reddit, I came across this blog post and subsequent comments where people were debating whether or not you should return a null or an empty collection. The point is, it shouldn't matter. Consuming code shouldn't make assumptions about what it's getting back from another piece of code. Ever. If it could possibly be null, you check for null, even if you developed the other code and know it can't be null. Why? Because while you know it can't be null, the code, and the rest of the world, doesn't know that. So when you get hit by a bus and some other poor developer updates that GetCollection method to return null, he doesn't break anything that calls it, because everything that calls it is taking on its own responsibility of making sure it has its ducks in a row.

No malloc is good malloc

Another small point to make here: Instantiating an empty collection just because you assume the other code isn't checking for nulls is simply wasting resources to accommodate bad practice elsewhere.


When writing a method to return something:

  • Return whatever makes sense, not what you worry someone needs. 
  • You shouldn't be concerned with what the consuming code is doing. (That wouldn't be separation of concerns, would it?)
  • Do not abuse system resources to be "nice" for other developers. 

When consuming a method that returns something:

  • The code you're writing is responsible for checking to ensure what is received is valid before operating on it. 
  • If it can't be null, check for null. 
  • If it has to be within a certain range, check to make sure it's in a certain range. 
  • You get the idea. Assume nothing about the returned value. 
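To put the consuming side in code, here's a minimal sketch in JavaScript (the countActive function and the shape of its input are hypothetical): the caller checks for null before operating on the returned value, no matter what the producer promises.

```javascript
// The caller owns the null check: never assume a returned collection
// is non-null, even if today's implementation never returns null.
function countActive(users) {
   if (!users) {
      return 0; // treat "nothing returned" however makes sense here
   }
   var count = 0;
   for (var i = 0; i < users.length; i++) {
      if (users[i].active) {
         count++;
      }
   }
   return count;
}
```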

TL;DR: You should probably be returning null, and when consuming code, you should always confirm returned values before using them.