Wiring up dependencies in node.js – wrap-up of my best practices

Quite a lot of developer friends making their first baby steps with node.js keep asking me about the “right” way to share database resources, or dependencies in general, among controllers, business logic or service objects. Now, there’s actually one universal wisdom in the world of node.js: there is no “right” way of doing things (have a look at the controller samples from express.js).

There are two good reasons why: First, most node.js frameworks including its core are very basic, offering modular, highly simplistic, fundamental but reusable code. They don’t make any assumptions about how you will use them, leaving it up to you to wire everything together as you want. Second, Javascript as a functional language offers many more options to shoot yourself in the foot than class-oriented / C-like languages. What’s missing (and maybe even unwanted) in the node.js universe is a sophisticated framework like Spring, Rails or Symfony that offers best practices under the hood you could rely on.

In this article I’d like to tell the story of how I first tried to transfer my knowledge from these “sophisticated” frameworks (usually IoC containers) to express.js, only to recognize the far simpler, node-“native” approach that achieves the same results. The first three examples I’m presenting are just evolutions of ideas I had until I noticed that node.js offers much simpler ways – so be warned that some of the upcoming code might seem unnecessarily blown up: the resolution you might want to comment on can be found at the very end.

The old world

Connecting to some kind of server-side database layer is a pretty straightforward task in your favorite non-JS language. Have a look at some pseudo-code:

Connection con = Driver.connect("some://driver.specific@connection.string:for/your/database");
ResultSet result = con.query("SELECT foo FROM bar WHERE id=1337");
List<Entity> entities = new ArrayList<>();
for (Row r : result) {
    Entity e = Driver.hydrate(r);
    entities.add(e);
}

If you’re a real node.js greenhorn, let me quickly explain why this kind of code will never work on node. Since Javascript (really!) executes in one thread at a time, lengthy operations like connecting to or querying a database must not block the main execution loop – otherwise the application won’t be able to respond to other incoming requests. Instead of waiting for the database to return a connection (like e.g. Java does), V8 will immediately execute the subsequent code. Regarding the example above, con might not have been initialized yet when con.query is executed. In Javascript you handle this kind of asynchronous event using callbacks, so in node.js the above example could be pseudo-coded as:

var _ = require("underscore"); // for _.each below
var entities = [];

driver.connect(connectionString, function(err, con) {
    var hydrator = new driver.Hydrator();
    con.query("SELECT ...", function(err, result) {
        _.each(result, function(r) {
            hydrator.hydrate(r, function(err, entity) {
                entities.push(entity);

                //business logic here

            });
        });
    });
});

You immediately notice the main problem in functional code: some call it the “Javascript pyramid of death”. It’s built on subsequently registered callbacks. I won’t dig deep into solutions to that issue here (promises are currently the best solution and they’re widely adopted) but I want you to have a look at the first line that connects to the database and serves as the root of our pyramid. On platforms like PHP or Java you would have a single place where you connect to your database. Then you would either wire that connection to clients that want to use it or ask some container to hand it back to you (and initialize it if it wasn’t before). Let’s have a look at that pattern in a container managed environment (far from being exactly IoC, but you should get the idea):

“Java” pseudo-code:

class Container {
    private Map<String, Object> resources = new HashMap<>();
    private Connection con = null;

    public Connection getConnection() {
        if (null == this.con) {
            this.con = Driver.connect(...);
        }
        return this.con;
    }
}

class FooController {

    @Inject("container")
    private Container container;

    ResultSet db = this.container.getConnection().query("...");
}

The idea behind this pattern is: there’s some godlike mega-registry (the IoC-Container) knowing, configuring and instantiating all your dependencies. If you need something you either annotate your dependencies and let Mr Registry inject it at startup time or call Mr Registry and ask for a fully configured and initialized resource (service, bean, you name it).

IoC-like coding in express.js / node.js

Adapting this pattern in node.js leads to rather uncomfortable code. Let’s start with an app.js to illustrate that (again: beware that I wouldn’t recommend using this kind of code, but you could definitely do so).


app.js

var express = require("express");
var http = require('http');
var Controller = require('./controller.js');
var Container = require("./ioc.js");

var app = express();
app.use(express.bodyParser());

app.set('port', process.env.PORT || 3000);

var container = new Container();
new Controller(container).route(app);

app.use(app.router);

http.createServer(app).listen(app.get('port'), function(){
    console.log('Express listens on port ' + app.get('port'));
});

ioc.js

var sqlite3 = require("sqlite3");
var Container = module.exports = function() {
    this.db = null; // lazily initialized in getDb()
};

Container.prototype = {
    initializeDb: function(db) {
        db.run("CREATE TABLE testing " +
            "(id INTEGER PRIMARY KEY AUTOINCREMENT, " +
             "info TEXT)", function(err) {
            console.dir(err);
        });
    },
    getDb: function() {
        if (null == this.db) {
            this.db = new sqlite3.Database(":memory:");
            this.initializeDb(this.db);
        }
        return this.db;
    }
}

controller.js

var Controller = module.exports = function(container) {
    this.container = container;
}

Controller.prototype =  {
    indexAction: function(req, res) {
        var db = this.container.getDb();
        db.all(
           "SELECT * FROM testing",
            function(err, rows) {
                res.json(rows);
            });
    },
    fooAction: function(req, res) {
        var db = this.container.getDb();
        db.get(
            "SELECT * FROM testing WHERE id = $id",
            {$id:req.params.id},
            function(err, row) {
                res.json(row);
            });
    },
    addAction: function(req, res) {
        var db = this.container.getDb();
        db.run(
            "INSERT INTO testing (info) VALUES ($info)",
            {$info: req.body.info},
            function(err) { //this: the statement object
                res.json({ lastId: this.lastID });
            });
    },
    route: function(app) {
        app.get('/foo', this.indexAction.bind(this));
        app.get('/foo/:id', this.fooAction.bind(this));
        app.post('/foo', this.addAction.bind(this));
    }
}

If you run this example and access GET /foo, you’ll get the error message “Error: SQLITE_ERROR: no such table: testing“. Notice that initializeDb has been called by getDb, but while it tried to execute the initial CREATE statement, the single thread already went on and executed the index action. Since the initializeDb callback has not been handled yet (it’s going to be handled right after the index action has finished), the SELECT statement cannot find the table.
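
You can reproduce the same race in miniature with nothing but node’s own scheduling primitives – a tiny sketch:

var ready = false;
process.nextTick(function() {
    ready = true; // stands in for the CREATE TABLE callback
});
console.log(ready); // false - the "query" runs before the init callback fires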

To remedy that situation, we could use callbacks in our client code, like this:

fooAction: function(req, res) {
    this.container.getDb(function(db) {
        db.get("SELECT ...", {}, function() { ... });
    });
}

That way we’re littering the container’s getDb interface with a callback parameter. Instead, people tend to use the so-called promise pattern at this point. There are some libraries out there that get the job done; Q is one of the most powerful of them and among many other features it offers a deferred interface that deals with that problem.
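
Before applying it to the container, here’s the basic shape of Q’s deferred interface in isolation – a minimal sketch with a timeout standing in for any asynchronous resource:

var Q = require('q');

function getAnswer() {
    var deferred = Q.defer();
    setTimeout(function() {
        deferred.resolve(42); // or deferred.reject(new Error(...)) on failure
    }, 100);
    return deferred.promise;  // hand out the future value
}

getAnswer().then(function(answer) {
    console.log(answer); // 42, once the value is there
});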

IoC-like code with promises


container.js

var Q = require("q"); // in addition to the requires shown before

Container.prototype = {

    initializeDb: function(db, callback) {
        db.run("CREATE TABLE testing " +
            "(id INTEGER PRIMARY KEY AUTOINCREMENT, " +
            "info TEXT)", callback);
    },
    getDb: function( ) {
        var deferred = Q.defer();
        if (null == this.db) {
            var self = this;
            this.db = new sqlite3.Database(":memory:");
            this.initializeDb(this.db, function(err) {
                if (err) {
                    deferred.reject(new Error(err));
                } else {
                    deferred.resolve(self.db);
                }
            });
        } else {
            deferred.resolve(this.db);
        }
        return deferred.promise;

    }
}

controller.js

    ...
    indexAction: function(req, res) {
        this.container.getDb().then( function(db) {
            db.all("SELECT * FROM testing",
                function(err, rows) {
                    res.json(rows);
                });
        });
    }
    ...

Starting the app after container setup has finished

That looks only a little better and is definitely not really usable yet. So let’s make one last attempt to fix things up: tell the container to initialize everything and start the application only once the basic initialization has finished.


app.js

var container = new Container();
container.initialize().then( function() {
    new Controller(container).route(app);
    app.use(app.router);

    http.createServer(app).listen(app.get('port'), function(){
        console.log('Express listens on port ' + app.get('port'));
    });
});

container.js

Container.prototype = {

    initializeDb: function(db, callback) {
        db.run("CREATE TABLE testing " +
            "(id INTEGER PRIMARY KEY AUTOINCREMENT, " +
            "info TEXT)", callback);
    },
    getDb: function( ) {

        if (null == this.db) {
            var deferred = Q.defer();
            var self = this;
            this.db = new sqlite3.Database(":memory:");
            this.initializeDb(this.db, function(err) {
                if (err) {
                    deferred.reject(new Error(err));
                } else {
                    deferred.resolve(self.db);
                }
            });
            return deferred.promise;
        } else {
            // note: once initialized, this returns the raw instance,
            // not a promise - callers like the controller below rely on that
            return this.db;
        }

    },
    initialize: function() {
        var dfd = Q.defer();
        this.getDb().then( function() {
            dfd.resolve();
        });
        return dfd.promise;
    }
}

controller.js

var Controller = module.exports = function(container) {
    this.container = container;
    // safe only because the container has been initialized before the
    // controller is constructed - getDb() returns the raw instance now
    this.db = container.getDb();
}

Controller.prototype =  {
    indexAction: function(req, res) {
        this.db.all("SELECT * FROM testing",
            function(err, rows) {
                res.json(rows);
            });
    },
    ...
}

This looks like a good start for configuring action controllers with container based dependencies on a very low level. The concept can be extended to a configurable container that loads a dependency tree and prepares singleton services, even lazy loads services using proxy objects. Et voilà: welcome back to the good old Spring / Symfony world. You can write your code that way; it’s going to work pretty much as expected (I worked that way for quite some time).
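
To illustrate where this leads (not as a recommendation), here’s a minimal sketch of such a configurable container with lazily created, promise-based singleton services – the names and structure are my own invention:

var Registry = function() {
    this.factories = {};  // name -> function returning a promise
    this.instances = {};  // name -> promise of the singleton
};

Registry.prototype.register = function(name, factory) {
    this.factories[name] = factory;
};

Registry.prototype.get = function(name) {
    if (!this.instances[name]) {
        // created once, on first request - a lazy singleton
        this.instances[name] = this.factories[name](this);
    }
    return this.instances[name]; // always a promise
};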

Out of the rabbit hole

Wake up, Alice – you’re not in Wonderland anymore. Here’s what I’m doing in node.js these days; pretty, lean and simple.


DB.js

var sqlite3 = require("sqlite3");
var db = new sqlite3.Database(":memory:");

db.run("CREATE TABLE testing " +
    "(id INTEGER PRIMARY KEY AUTOINCREMENT, " +
    "info TEXT)");

module.exports = db;

app.js

var express = require("express");
var http = require('http');
var Controller  = require('./controller.js');
var DB = require("./DB.js");

var app = express();
app.use(express.bodyParser());

app.set('port', process.env.PORT || 3000);

app.use(app.router);

app.get('/foo', Controller.indexAction.bind(Controller));
app.get('/foo/:id', Controller.fooAction.bind(Controller));
app.post('/foo', Controller.addAction.bind(Controller));

http.createServer(app).listen(app.get('port'), function(){
    console.log('Express listens on port ' + app.get('port'));
});

controller.js

var DB = require("./DB.js");

module.exports = {
    indexAction: function(req, res) {
        DB.all("SELECT * FROM testing",
            function(err, rows) {
                res.json(rows);
            });
    },
    fooAction: function(req, res) {
        DB.get(
            "SELECT * FROM testing WHERE id = $id",
            {$id:req.params.id},
            function(err, row) {
                res.json(row);
            });
    },
    addAction: function(req, res) {
        DB.run(
            "INSERT INTO testing (info) VALUES ($info)",
            {$info: req.body.info},
            function(err) { //this: the statement object
                res.json({ lastId: this.lastID });
            });
    }
}

The “magic” that might seem unusual to mature developers coming from class-oriented environments lies in the “module” concept, one of node’s cornerstones. A module is an encapsulation of state and behavior, and it can be used in a service-like fashion as in the last example. Notice that I’m requiring DB.js in app.js without even using it there. That way node.js executes the code inside once and keeps the reference in module.exports – the database is therefore prepared by the time a controller uses it. Well, not exactly: if the preparation / setup of resources takes really long, the application is up before initialization has finished (try wrapping the database setup in a timeout; I left the code as a comment in the repo).

But what’s more important: taking this approach you don’t have to take much care of your dependencies but can access them from any module you want simply by requiring them. The db variable in DB.js always refers to the very same instance.
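
You can verify that caching behavior with a two-line module – a hypothetical counter.js, just for demonstration:

// counter.js
var count = 0;
module.exports = function() { return ++count; };

// anywhere else: every require() returns the very same function
var a = require('./counter.js');
var b = require('./counter.js');
console.log(a());     // 1
console.log(b());     // 2 - the module body ran exactly once
console.log(a === b); // true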

tl;dr / lessons learned: node.js and expressjs don’t propose a structure for managing dependencies and wiring them up with your clients. While you could write your code in a rather classical way, it’s mostly much simpler to use node’s built-in concepts.

PS: I just found this stackoverflow question that underlines, in short, the concept I tried to explain here. Good to know I’m not alone with my opinion 😉

Using module.exports the “right way” for service instances and IDE introspection

I’m using service objects in node.js that are responsible for database operations on business entities and also perform some low-level business logic if needed. Recently I was refactoring my code and came up with this pattern, which I currently consider a “best practice” of doing things.

Service objects that perform asynchronous actions on remote services, like querying a database, must get their resources at some point. Naively you could instantiate each service every time you need it and provide it a (fresh) link to your database (which you might store globally or in an application instance that you hand around). Now, in Javascript, or more precisely in a node.js / CommonJS environment, there’s a better way of doing that: the module. It is not too obvious for developers coming from a Java-like background that those modules can (but don’t have to) be used for instantiating “singleton” services and can act as single activation points to set your service objects up with their resources. So here’s an example (please note that I’m omitting some “real world” db logic; the mongo connection is there only for illustration):

Your “service module”, responsible for getting a user from a database (“UserService.js”):


var UserService = function() {

    this.db = null;
    this.collectionName = "users";
    this.collection = null;

};

UserService.prototype = {

    connect: function(db) {
        if (this.db != null)
            return;

        this.db = db;
        this.collection = db.collection(this.collectionName);
    },

    getUser: function(id, callback) {
        this.collection.findOne({_id: id}, callback);
    }

};

module.exports.UserService = new UserService();

Your main module (“app.js” or whatever you want to call it)


var db = require('mongojs').connect("mongodb://a-fancy-server:27109/master"),
    UserService = require('./UserService.js').UserService;

app.set('mngdb', db);
UserService.connect(db);

var xId = new mongoSkin.ObjectID("1a2b3c4b5e..."); // ObjectID construction for illustration only
UserService.getUser(xId, function(err, doc) {
    console.dir(doc);
});

Notice that you’re initializing (“connecting”, in this case) the single instance of UserService in your main module. That means it is ready to go in any other module where you’d like to use it. That’s a good solution for immutable service instances that don’t depend on any state but only on some resources like database connections or global settings.
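
For illustration, here’s a hypothetical consumer module – it works in any module loaded after app.js has called connect(), without any further wiring:

// printUser.js
var UserService = require('./UserService.js').UserService;

module.exports = function(id) {
    // the same, already-connected singleton instance
    UserService.getUser(id, function(err, doc) {
        console.dir(doc);
    });
};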

In the rare case where you’d like to have another user service, you can export the constructor from your service module as well (in UserService.js):


...
module.exports._UserService = UserService;

and if you need another one, you can (anotherModule.js):

var db = require('mongojs').connect("mongodb://a-crazy-server:27110/samples");
var CustomUserService = require('./UserService.js')._UserService;
var myCustomUserService = new CustomUserService();
myCustomUserService.connect(db);

There’s one little twist I found when playing around that might be helpful when you try this “pattern” on your own. You might be tempted to omit the service’s name in module.exports, like so (UserService.js):

....
//don't do that
module.exports = new UserService();

because then you could (yetAnotherModule.js):

var userService = require('./UserService.js');
userService.getUser(...);

That code definitely works. But note that a) you cannot export anything else (e.g. the constructor) that way and b) your IDE might not be able to resolve the methods (getUser) of that instance (IntelliJ WebStorm, for example, cannot).

When an old Mustache partial wants {{.}}, give him {".": v}

Javascript is full of weird “special” features. I’m reusing my Mustache templates via Hogan.js for Express on the server and on the client side. Lately I noticed that the client-side ICanHaz template loader uses Mustache 0.4.0, a version that’s highly outdated. Let me show you what I tried.

When traversing arrays the {{.}} template variable comes in handy: it’s replaced with the “current” value. If you want to render an array of strings you do this (Mustache 0.4.0):

$ npm install mustache@0.4.0
$ node

var Mu = require('mustache');
var opts = {attrs: ["weird", "crazy","awesome"]};
var tpl = 'Javascript is {{#attrs}} {{.}} {{/attrs}}';
Mu.to_html(tpl, opts); //Mu 0.4.0

> 'Javascript is  weird  crazy  awesome '

More advanced: let’s use partials (preregistered sub-templates):

var partials = {part: "<b> {{.}} </b>" };
var tpl = 'Javascript is {{#attrs}} {{>part}} {{/attrs}}';
Mu.to_html(tpl, opts, partials);

> 'Javascript is  <b> weird </b>  <b> crazy </b>  <b> awesome </b> '

What if you’d like to render the part partial on its own for only one string, say “queer”?

var qu = "queer";
Mu.to_html(partials.part, qu)

> '<b>  </b>'

Solution:

var qu = { ".":"queer" };
Mu.to_html(partials.part, qu);

> '<b> queer </b>'

At first I thought this was a major “flaw” in Mustache, but on the server side I’m using the latest version (0.7.0), so I had never noticed this behaviour before. Good thing: you can bring your own Mustache for ICanHaz if you don’t want to go the “.” way 🙂 Going to do that now.

In 0.7.0 you simply do:

var Mu = require('mustache');
var partials = {part: "<b> {{.}} </b>" };
var qu = "queer";
Mu.render(partials.part, qu);

> '<b> queer </b>'

Lessons learned: object property names can be “.”. Another funny side note: you can also use keywords as property names; e.g. I recently saw someone returning {“return”:”true”} in his AJAX calls. I doubt that this is good practice…
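
A quick sketch of what the language allows here – any string, including “.” and reserved words, is a legal key as long as you use bracket access:

var obj = { ".": "dot", "return": true };
console.log(obj["."]);      // 'dot'
console.log(obj["return"]); // true - reserved words work as keys, too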

Dealing concurrently with long running / blocking tasks in node.js

Today I spent nearly half of my day digging through solutions for the critique that Ted Dziuba came up with in his article “Node.js is Cancer” from Oct. 1st 2011. There he states that it’s far easier than expected for newbies to write code that blocks node’s single-threaded event loop. First I thought there might be a simple workaround, but as it turns out Ted isn’t so wrong at all – yet there are solutions. Ted’s critique is based upon a fairly naive example (c’mon, it’s just a stand-in for far more complex situations). He computes Fibonacci numbers like so:

function fibonacci(n) {
   if (n < 2)
      return 1;
   else
      return fibonacci(n-2) + fibonacci(n-1);
}

If you invoke this piece of code for n > 40 you’ll notice that node.js takes quite some time to compute the result (mainly due to the non-cached results in the recursive computation). That’s not a shortcoming of JavaScript or V8 – if you write similar code in PHP or Python you’ll also end up waiting for the result (here’s a blog article doing exactly that). The difference is: while PHP on FastCGI or Java on an application server make use of a pool of threads that utilizes all the CPU resources you have, node.js relies on a single-threaded execution model (the so-called “event loop”). That means a request invoking the code above blocks your node server completely, since the event loop waits until the method returns. To overcome this issue (let’s call it a feature) you usually provide a callback function that is called once the lengthy operation has finished and then continues to write to the output stream. The main execution loop is then not blocked at all.
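
To make the problem concrete, here’s roughly the kind of handler Ted’s critique targets – a sketch using the fibonacci function from above:

var http = require('http');

http.createServer(function(req, res) {
    res.writeHead(200, {'Content-Type': 'text/plain'});
    // this synchronous call monopolizes the single thread:
    // every other request queues up until it returns
    res.end("Result: " + fibonacci(40));
}).listen(1337, "127.0.0.1");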

first idea (not working)

So my first idea (which turned out not to work, so read on if you’re looking for solutions) was to use one of the asynchronous libraries like flow, async or step, and then I stumbled upon Q. I read a little about the concept of future return values, the so-called promises. This concept wraps your long running code into a function that returns a promise object for the future value. This object comes with certain methods that substitute the callback principle. Here’s a small change I made with it:

var http = require('http'),
    Q = require('q');

// fibonacci() as defined above
var server = http.createServer(function (req, res) {
    res.writeHead(200, {'Content-Type': 'text/plain'});
    var promise = Q.call(fibonacci, null, 80);
    promise.then(function(fbRes) {
        res.end("Result:" + fbRes);
    });
}).listen(1337, "127.0.0.1");

The Q library defers the call and returns a promise; we bind an anonymous callback using its then method. But as a simple test with ab (ApacheBench) shows, this still blocks other requests to that node.js instance: the computation has merely been deferred, it still runs on the one and only thread.

second idea – use child processes or the cluster module

Obviously the team around node.js is aware of this “issue”. Read the About article in the node.js master documentation carefully. In the end it says:

But what about multiple-processor concurrency? Aren’t threads necessary to scale programs to multi-core computers? You can start new processes via child_process.fork(); these other processes will be scheduled in parallel. For load balancing incoming connections across multiple processes use the cluster module.

So I had a look at those. As it turns out, node’s cluster module is marked as highly experimental, so I’d rather not make use of it at all. The child_process module doesn’t look exactly easy to grasp either, so I started googling again. One of the first hits led me to SitePen, which accounts for a rather promising multi-node module. Unfortunately these guys relied on node’s cluster module, so their module simply fails to execute on node.js > 0.6. Another company finally found a way, but this time it seems to be commercial (though an open source license is available): the Fabric Engine. It compiles Javascript code to the native environment so it can be utilized on the server as well as on the client. That sounds interesting for gaming or highly scientific applications but leads a little too far when it should only solve our Fibonacci cancer theorem.

So maybe we find something in node’s own toolbox? Here comes an interesting StackOverflow article that goes the remote VM way, which basically spawns another V8 (+10MB overhead) that executes the code in another (sandboxed) process. That sounds good, but it’s not too easy to hand the return value from the spawned process back to the handling process. The idea pointed out in the article is to let our fibonacci method write to the output stream and let our master process read from it in a non-blocking manner. When Mr Fibonacci finally writes its result to stdout, we can hand the value (which comes in as a string) over to the still-open response object and finish it.
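
Note that child_process.fork() comes with a built-in message channel, which avoids parsing stdout strings altogether. A minimal sketch, assuming a fib-worker.js we write ourselves:

// fib-worker.js
function fibonacci(n) {
    return n < 2 ? 1 : fibonacci(n - 2) + fibonacci(n - 1);
}
process.on('message', function(n) {
    process.send(fibonacci(n)); // the result travels back over the IPC channel
});

// in the master's request handler:
var fork = require('child_process').fork;
var worker = fork(__dirname + '/fib-worker.js');
worker.on('message', function(result) {
    res.end("Result: " + result);
    worker.kill();
});
worker.send(40);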

Finally: solutions

Good news is: there are solutions. Bad news is: they still don’t come easy. One article you’ll most likely stumble upon when googling for concurrent Javascript is the one from Bruno, which first explains how the heavily discussed Fibers module tries to address the concurrency problem. The shortcoming of Fibers is that it’s delivered as a C++ module which makes assumptions about the underlying OS. Maybe a good solution for homogeneous environments but… you know, there are still Windows boxes in the wild. In the end Bruno starts talking about the rather new Threads a GoGo library which natively utilizes threading inside the V8 engine (and therefore relies on a newer release of node.js). One important thing to notice (and Bruno points this out, too) is that all threads spawned through this approach run in their own isolated environment, so you won’t have many chances to hand state over between them. Here’s a piece of code that runs asynchronously:

var TAGG = require('threads_a_gogo');

// our CPU intensive function
function fibo(n) {
  return n > 1 ? fibo(n - 1) + fibo(n - 2) : 1;
}

// create a worker thread
var t = TAGG.create();
t.eval(fibo); // threads run in isolated contexts: define fibo inside the thread first
t.eval("fibo(30)", function(err, result) {
  console.log("fibo(30)=" + result);
  t.destroy();
});

Bleeding edge approach: web workers

For some months now the WHATWG has been finalizing the definition of web workers, which act as a threading model for client-side code but can of course run on the server side, too. Here’s a pointer to a library which uses that approach. The main advantage is the standardized way of message and event handling. In TAGG the long running function returns its result as a string value which we can utilize; the web worker approach makes communication between the thread compartments easier by introducing a messaging protocol.
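
The interface itself is standardized; here’s a sketch of how it looks (the require()d module name is hypothetical – substitute whatever server-side worker library you pick):

var Worker = require('webworker').Worker; // hypothetical module name

var w = new Worker(__dirname + '/fib-worker.js');
w.onmessage = function(event) {
    console.log("fibo = " + event.data); // structured messages instead of strings
    w.terminate();
};
w.postMessage(40);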

The nextTick solution

Last but not least, our friend Prash has refactored the original fibonacci function to be handled in slices, which allows it to bind against the event loop’s ticks. That way it can return CPU resources to the main execution loop. I didn’t test his solution on my own yet, but I think this works well without adding any native code or weird library approaches. But let’s face it: the resulting code is a bloated mess, and the async callback inside the computation recursion mixes up concerns – after all, we only want to compute fibonacci(40). So IMHO this serves as an example of how you can decompose computation-intensive tasks into asynchronous slices if you know what you’re doing (a sketch of the idea follows below). Prash’s solution is also a good example that in node and Javascript things have to be thought differently: you have to think in functions, instances and callbacks rather than in templated interfaces, classes and listeners.
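
This is not Prash’s exact code – just a minimal sketch of the slicing idea, using an iterative variant of the function: do a bounded chunk of work, then yield back to the event loop before continuing.

function fibonacciSliced(n, callback) {
    var a = 1, b = 1, i = 2; // fib(0) = fib(1) = 1, as above
    (function slice() {
        for (var steps = 0; i <= n && steps < 10000; steps++, i++) {
            var next = a + b;
            a = b;
            b = next;
        }
        if (i > n) return callback(b);
        process.nextTick(slice); // give pending requests a chance to run
    })();
}

fibonacciSliced(40, function(result) {
    console.log("fibo(40) = " + result);
});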

Conclusion

After all that research I have to admit: node.js really has its shortcomings when you want to execute CPU-intensive computations. It is definitely not yet prepared for unobtrusive usage of multiprocessor architectures, even though its child process, VM and cluster features provide built-in starting points. The most promising solutions are Threads a GoGo and web workers. Fibers, on the other hand, seem like overkill to me and fully depend on a UNIX-like OS.

One thing I definitely want to make clear: this shortcoming does not affect long-running database operations. Most of the libraries that I found and that I’m using rely on the non-blocking, asynchronous callback mechanisms that make node.js so interesting for highly loaded environments. That means: even if your unindexed MySQL query needs 20 seconds to return, node.js will be able to serve other requests in the meantime, because the MySQL “driver” hands control back to the node event loop as soon as you have fired your query. It simply calls the open response handler back as soon as a result set is available. I also want to point out that node.js encourages you to shift your development paradigm to the client side. You usually deliver only a basic HTML5 site layout to the client. Afterwards you issue AJAX/J queries to your node.js backend and render everything on the client side again. Node.js mainly serves as a handling layer for incoming events and therefore is not 100% comparable to other execution environments. A Java application server doesn’t impose the aforementioned threading problems, but it will get stuck on the Fibonacci computation, too, as soon as its thread pool is exhausted.

So don’t use node.js the same way you’re using your favorite web technology – otherwise you will most certainly run into unexpected problems. And no, don’t use node.js for your scientific calculations. If you do anyway and fail, at least don’t call it cancer.

Defining object request handlers in express: obj.bind(obj) to the rescue

If you’re coming from a class-oriented web development environment you might be used to handler classes that are bound to a dispatching front controller and handle requests. For example, I’m used to writing something like this in Spring MVC:

@Controller
public class SomeHandler {

	@RequestMapping("/testobj")
	public void handle(HttpServletRequest req, HttpServletResponse resp) throws IOException {
		resp.getWriter().println("Hi there");
	}
}

When moving to a Javascript / event-oriented backend environment, things are a little different. The highly acclaimed and well designed expressjs framework for node.js, which is based on the middleware-oriented connect framework, uses callback functions that are bound to a common routing handler. If you’ve written backend code for expressjs you’ve most likely stumbled upon constructs like this:

app.get('/testobj', function(req, res) {
	res.end("Hi there");
});

Instead of encapsulating response behaviour in a class of its own, you define anonymous functions and hand them over as handler middleware. No doubt: if you write all your code like this you’ll end up with a giant main script and lots of those anonymous functions.

The idea of encapsulation is also known to Javascript, though, so I started to combine handler functions into handler objects living inside their own modules. Here’s an example:

// modules/testobj.js
function TestObj(name) {
	this.name = name;
}
TestObj.prototype.sayName = function() {
	return this.name;
};
/**
 * handler for /testobj
 */
TestObj.prototype.render = function(req, res) {
	var aName = this.sayName();
	console.log("aName: " + aName);
	res.end(aName);
};
exports.TestObj = TestObj;

The object TestObj is exposed while the methods are added to its prototype. The handler object gets some environment – instead of handing over a simple name in the constructor, imagine handing over a database connection or a facebook client object; you name it. Since the middleware interface of the connect framework only accepts functions with a (req, res) signature (I’m simplifying a little here), you cannot hand over your “dependency injected” object but rather have to hand over its member / prototype functions. So you are tempted to write something like this:

var to = require('./modules/testobj');
var anObject = new to.TestObj("Stefan");
app.get('/testobj', anObject.render);

A request to /testobj is sent to the anObject.render middleware, which is supposed to render “Stefan” to the screen (as TestObj was constructed with that name). Instead it shows nothing. If you have a look at your console output you’ll notice that aName is undefined.

Without going into too much detail, the easy explanation for this behaviour is that by referencing the function directly it loses its reference to this. If you want, log this to the console inside the handler function – it’s going to show the global object, since in non-strict mode this falls back to the global object when a function is called without a receiver.
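
You can reproduce the effect in a few lines – a minimal sketch:

var obj = {
    name: "Stefan",
    who: function() { return this.name; }
};

var detached = obj.who; // the function travels without its object
detached();             // undefined - "this" is the global object now
detached.bind(obj)();   // 'Stefan' - bind() pins the context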

The solution is to bind the this context to the function so it’s kept even when the function is called by someone else (namely the connect dispatcher). Luckily this feature has been implemented in ECMAScript 5, which node.js supports, so the solution is really easy once you’ve found it:

app.get('/testobj', anObject.render.bind(anObject));

will “rebind” the object reference as this to the render function, and the code works as expected.

It took quite a while to figure that one out. I hope this is helpful!