One of our QA engineers put together a Policy Expert themed mini-game using JavaScript and HTML5. The game runs entirely client-side and uses side-scrolling animation.

“I made this game to teach myself Javascript,” said Oliver Bray, the creator of this game. “It’s really effing hard.”

Here’s the game description and link:

Policy Expert Mini-Game: Up!
Navigate the flying house through the treacherous city, avoiding floating walls while collecting as many coins as you can! Earn points for staying up in the air, and gain a higher score by collecting coins. Watch as day turns to night while you progress through increasingly difficult levels.

Click here to play.

Have a go and see if you can break his high scores! Send us a screenshot of your high score, and we might send you a prize.

Having the opportunity to work on so-called ‘greenfield’ projects is not something everyone can boast about. In fact, most of us will be surrounded by legacy software for the majority of our careers. It is almost certain that you will at some point come up against software systems that are difficult to change, grow or evolve; that is simply the nature of software development.

The term ‘legacy’ itself is an oft-misunderstood concept. Legacy code, by its official definition, is code that exists “for the purpose of maintaining an older or previously supported feature”. Legacy code is often found in systems where only a few people still know how it works. The term can be used to describe all of the above, but as you make your journey through various jobs, it will start to take on its own underlying connotations.

Within the context of a mature software organization, where the next generation of developers has come to outnumber the original authors of the legacy systems, there are often negative undertones associated with legacy systems. The musings of a few freshly hired developers, when they speak of legacy code, might be interpreted to mean obsolete or irrelevant code written before their time. To put it more succinctly: legacy code is code that they didn’t write (and is therefore obviously inferior). This kind of short-sighted thinking leads many to abandon legacy systems entirely, regardless of the value they bring or the revenues they generate.

However, there are always pragmatic ways to deal with these problems. In large legacy systems that are inherently brittle due to their age and complexity, the ideas behind being ‘agile’ lend themselves nicely to a dedicated effort of self-improvement. People frequently advocate incremental refactoring, but that comes with the implied notion that there is no tangible end to the battle. Incremental refactoring follows the Boy Scout rule (a cousin of the Broken Windows theory): leave things in better shape than you found them. While this practice is a great way to maintain a level of code quality, it is not an effective approach for building completely new features.

A couple of ThoughtWorkers went through an exercise in which they were asked to deliver brand new features on an old, brittle system; rather than wrestle with that system, they went about it in a more novel way [1]. Rather than spending months (or, more likely, years) understanding an undocumented system and rewriting its core logic, they treated the new features as a whole new system. This approach is sometimes lovingly referred to as ‘wrap the crap’.

Essentially, we take what the system does now and leave it alone; let it continue to do what it normally does, with no further development. Any new interactions with that system must go through an adapter or façade layer. This layer ‘wraps’ the legacy system, providing a thin transformation of data and encapsulating the details of the legacy system. That might mean exchanging an obsolete transport protocol (RMI) for a more universally accepted one (REST over HTTP), or abstracting a convoluted messaging format (SOAP) behind a more standard one (JSON).
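For illustration, here is a minimal sketch of such an adapter in JavaScript – `legacyClient` and all of the field names are hypothetical, not taken from a real system:

// one clean method per business operation; callers never see the
// legacy payload shape or transport details.
var LegacyAdapter = {
    getCustomer: function(id, callback) {
        // translate the clean request into the legacy convention...
        legacyClient.call('CUST_LOOKUP', { CUST_REF: String(id) }, function(raw) {
            // ...and normalise the response into a sane JSON shape.
            callback({
                id: raw.CUST_REF,
                name: raw.CUST_NAME,
                postcode: raw.ADDR_PCODE
            });
        });
    }
};

New features talk only to the adapter; the day the legacy system is finally retired, its replacement only has to honour this thin interface.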

As software professionals, we’ve often condoned the shortcuts used to endure ‘one more change’ within a legacy system. While this is the quicker way to get things done in the short term, it should be no surprise to most of us that all it really does is create Instant Legacy Code™. Retiring older legacy systems through abstraction in an adapter layer means that your next new features can be just as rewarding as brand new mini-greenfield projects. Business owners will rarely accept an entire rewrite of a system, but they will surely be more agreeable to maintaining business-as-usual operations while you progressively build for the next generation.

References

[1] An Agile Approach to a Legacy System, ThoughtWorks Inc.

Just a quick heads up. Epitome, our MooTools-based MVP framework, is now officially `out there`. Feel free to use, hack and contribute back or report bugs.

It also comes with:

… and everything you need to create some cool applications. Happy coding.

Having something like BusterJS test coverage is pretty awesome, but sometimes you want automated CI testing as well. This is where Travis CI comes into play – a brilliant platform for continuous integration testing of open source projects on GitHub.

We added it to Epitome, and it now has a shiny new build-status ‘badge’ on the README.md.

So, how do you go about adding CI to your project?

Travis CI runs JavaScript tests on top of Node.js, so before you begin, you need to define your project in a package.json.

In Epitome’s case, the file is very simple and looks like this:

{
    "author" : "DimitarChristoff",
    "contributors": [
        {
            "name": "Dimitar Christoff",
            "email": "christoff@gmail.com"
        },
        {
            "name": "Simon Smith",
            "email": "ssmith@qmetric.co.uk"
            
        },
        {
            "name": "Garrick Cheung",
            "email": "garrick@garrickcheung.com"
        },
        {
            "name": "Chiel Kunkels",
            "email": "ckunkels@qmetric.co.uk"
        }
    ],
    "name" : "epitome",
    "description" : "Epitome MV* for MooTools",
    "version" : "0.0.1",
    "scripts" : {
        "test" : "node_modules/.bin/buster-test"
    },
    "repository": {
        "type": "git",
        "url": "https://github.com/DimitarChristoff/Epitome"
    },
    "keywords": [
        "mootools",
        "epitome",
        "mvc"
    ],
    "main": "./build/Epitome.js",
    "license": "MIT",
    "devDependencies" : {
        "buster" : "~0.6.0"
    },
    "engines" : {
        "node" : "~0.6"
    }
}

Basically, the only real dependency required to run things is buster, and `~0.6.0` gives us the latest 0.6.x release.

Once you have created the file, you can test it by running npm install, which should bring busterjs into the project’s node_modules directory. You should add node_modules/ to your .gitignore file.

To test that everything is working locally, first start your buster-server as you would normally:

dchristoff@Dimitars-iMac:~/Projects/Epitome (master):
> buster server &
[1] 64510
dchristoff@Dimitars-iMac:~/Projects/Epitome (master):
> buster-server running on http://localhost:1111

Now attach a browser to the capture page if you need one (skip this if your unit tests are node-only), then check that it all works:

dchristoff@Dimitars-iMac:~/Projects/Epitome (master):
> npm test

> Epitome@0.0.1 test /Users/dchristoff/projects/Epitome
> node_modules/.bin/buster-test

Firefox 13.0.1, OS X 10.7 (Lion): ................................................................................ 
                                  ..                                                                               
9 test cases, 82 tests, 82 assertions, 0 failures, 0 errors, 0 timeouts
Finished in 0.973s

Your tests should pass. You are halfway there! Next, go to the Travis CI website and log in with your GitHub account. Pick your repository from the list and enable CI on it; this installs a commit hook and enables the integration.

To define how Travis should work and test your project, we need a simple YAML file called .travis.yml:

before_script:
  - export DISPLAY=:99.0
  - sh -e /etc/init.d/xvfb start
  - sleep 5
  - node_modules/.bin/buster-server &
  - sleep 5
  - firefox http://localhost:1111/capture &
  - sleep 5


script:
  - "npm test"

language: node_js

node_js:
  - 0.6

Basically, you are saying: start a virtual X server and export the display, start the buster capture server and fork it into the background, then launch Firefox against the capture URL, sleeping 5 seconds between steps. The sleep times are a little arbitrary and fragile, but there seems to be no event that can confirm the capture has succeeded.

When done, it will run npm test. That’s it! What happens afterwards when you commit is very simple: Travis will clone your repo, parse the YML file, run npm install to get deps (buster) and run the test script. If it exits with code 0, your build is considered a success.

On the Travis console you will see the clone, then a lot of submodule fetching, and finally the test output.

You can see the last build status on the Travis site by going here.

See what tests we actually run here.

Good luck and have fun coding in the knowledge that your tests will always run, no matter what. Now if only there was something that could write the tests for you…

There are plenty of posts out there on the benefits of information radiators and Big Visible Displays, but I thought I’d share our implementation.

We currently have 6 radiators which help raise the team’s awareness of various aspects of our ecosystem – infrastructure, application performance, build status, logging behaviour and financial metrics.

There’s no reason other teams, such as marketing, could not have their own real-time radiators; the barriers to entry are quite low.

Hardware:

  • 1 ErgoMounts EMVPF1300-1X4-21B quad floor stand
  • 2 Dell Ultrasharp U2412M 24 inch IPS Widescreen LED Monitors
  • 2 H-Squared Mounts for Mac mini Unibody
  • 2 Mac minis
  • 2 Mac mini DisplayPort to DVI Adapters

Software:

  • Google Chrome extensions Tab revolver and auto-refresh
  • Sentry dashboard for exception logging / tracking
  • Geckoboard dashboard for Company financial metrics and Pingdom report displays
  • Highcharts and Codahale Metrics (developed by Yammer) for realtime application performance monitoring
  • JUnitBenchmarks for performance testing


Just a quick announcement if you are following the development of our MV* library: we have added a templating engine to Epitome. You are, of course, free to use your own choice of templating engine, like Mustache, Slab or Handlebars. We just thought it would be nice if some basic logic came `out of the box`.

The natural candidate to include was the underscore.js templating engine, due to sheer popularity and familiarity. _.js’ templating function is actually based on an old post by John Resig, but it has one serious shortcoming: when you pass it a template referencing an object key that is not in the model, it throws a reference error and dies. Here is an example that will fail your app:

<input type="text" name="surname" value="<%=surname%>" />
...
_.template(tpl, {
    name: 'Dimitar'
}); // throws due to failing `with(obj) {  ... surname }` reference.

So, Epitome.Template fixes that and makes it simpler to use. The spec is basically 2 types of tags (instead of _.js’ 3):

This is some <%=what%> literal code, referencing a property in an object
<% if (obj.isAdmin) { %>
This is only visible if the <%=object%> passed contains a truthy `isAdmin` property
<% } %>

The appearance of the escape tags can be changed to your liking via the `options`, eg. use `{{` `}}` and `{{=` `}}` instead – just change the regexes in `options.evaluate` and `options.normal`. The rest is fairly self-explanatory and commented throughout.
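For instance, swapping to mustache-style braces only needs those two regexes overridden (a quick sketch, using the class as defined below):

var braces = new Epitome.Template({
    evaluate: /\{\{([\s\S]+?)\}\}/g, // block logic: {{ if (obj.admin) { }}
    normal: /\{\{=([\s\S]+?)\}\}/g   // literal output: {{=property}}
});

braces.template('Hello {{=name}}!', { name: 'world' }); // 'Hello world!'

Note that the literal (`normal`) tags are replaced before the `evaluate` ones, so the two patterns can safely share the same braces.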

Epitome.Template = new Class({
	// a templating class based upon the _.js template method and john resig's work
	// but fixed so that it doesn't suck. namely, references in templates not found in
	// the data object do not cause exceptions.
	options: {
		// default block logic syntax is <% if (data.prop) { %>
		evaluate: /<%([\s\S]+?)%>/g,
		// literal out is <%=property%>
		normal: /<%=([\s\S]+?)%>/g,
		
		// these are internals you can change if you like
		noMatch: /.^/,
		escaper: /\\|'|\r|\n|\t|\u2028|\u2029/g,
		unescaper: /\\(\\|'|r|n|t|u2028|u2029)/g
	},
	
	Implements: [Options],
	
	initialize: function(options) {
		this.setOptions(options);
		
		var unescaper = this.options.unescaper,
			escapes = this.escapes = {
				'\\': '\\',
				"'": "'",
				'r': '\r',
				'n': '\n',
				't': '\t',
				'u2028': '\u2028',
				'u2029': '\u2029'
			};
		
		Object.each(escapes, function(value, key) {
			this[value] = key;
		}, escapes);
		
		
		this.unescape = function(code) {
			return code.replace(unescaper, function(match, escape) {
				return escapes[escape];
			});
		};
		return this;
	},
	
	template: function(str, data) {
		// the actual method that compiles a template with some data.
		var o = this.options,
			escapes = this.escapes,
			unescape = this.unescape,
			noMatch = o.noMatch,
			escaper = o.escaper,
			template,
			source = [
				'var __p=[],print=function(){__p.push.apply(__p,arguments);};',
				'with(obj||{}){__p.push(\'',
				str.replace(escaper, function(match) {
					return '\\' + escapes[match];
				}).replace(o.normal || noMatch, function(match, code) {
					// these are normal literal output first, eg. <%= %>
					return "',\nobj['" + unescape(code) + "'],\n'";
				}).replace(o.evaluate || noMatch, function(match, code) {
					// the evaluating block is after so <% logic %>
					return "');\n" + unescape(code) + "\n;__p.push('";
				}),
				"');\n}\nreturn __p.join('');"
			].join(''),
			render = new Function('obj', '_', source);
		
		if (data) return render(data);
		
		template = function(data) {
			return render.call(this, data);
		};
		template.source = 'function(obj){\n' + source + '\n}';
		
		return template;
	}
});

You can play with it live in this tinker: https://tinker.io/76e62

In other news, Epitome itself has now reached the phase where a build is supported and can actually be used safely. More on the progress later…

Nice post from @energizedwork about how developers effectively “live” in the codebase – it certainly feels like that at times! But it is a cornerstone of a great development team that everyone is thorough in their approach to refactoring and best practice. An old analogy about doorsteps springs to mind – which I won’t print here – but ultimately it’s all about respect for your colleagues!

http://www.energizedwork.com/weblog/2012/06/developers-are-users-of-the-code

It also notes that a more flexible attitude helps in business thinking (within a software delivery environment), allowing you to explore and experiment with different and potentially better solutions, thereby maximising value to the customer and, in return, to the business. I think this is a reflection of how far and fast the technology industry has moved, relative to others, over the last 10 years or so. But it’s a very important point, and that culture is worth building for the long term, as the best ideas rarely come from a single source.

Reminded me of a nice definition I read some time ago for the term “hacker”.

This is a MooTools JavaScript library tutorial
Difficulty: moderate
Skill: intermediate+
Requires: Prior knowledge of JavaScript, MooTools and MooTools Classes
Read: Part 1: Creating your own MVC-like data model class in MooTools
UPDATE: This tutorial is now hosted in its own GitHub repository called Epitome. You can view the Groc documentation of the class on the gh-pages branch here. Please ignore the attempts to modularise it for AMD/CJS as this is very much work-in-progress. To run example/model-demo-sync.html, the project needs to be served from a web server, as it relies on the XHR subsystem. A sample PHP script that returns a JSON object is provided, and a mod_rewrite rule in .htaccess is added so that it can work with REST-like URLs.

In our previous installment, we covered creating a simple Model class that gives you managed key->value data pairs with appropriate events and exports.

You can view the base Model class in its entirety (as it was before this installment) here: https://tinker.io/beceb (out of date)

Better yet, look at it on github instead!

As a side note: if we ever need to change or update Epitome, GitHub is the only source guaranteed to be up to date – keeping blog posts, jsfiddle and tinker.io examples current is not exactly fun.

Today, we will take this further by making sure the Model can synchronise with the server RESTfully. To do so, we first need to design a spec to work against.

  • Go with the API already established in Backbone etc – add a .sync(mode, model, options) method to the Model prototype
  • Each model now NEEDS to have an `ID`
  • The model needs to have a single Request instance so we get no versioning issues
  • Each model now needs to have a `urlRoot` that points to the server REST api
  • Export CRUD methods on the model proto so it can work as Model.read() etc.
  • Define custom accessors so that `urlRoot` does not go in the model but is usable

So, where to begin? By creating our buster tests. There are many patterns we could use to make them pass. The most obvious one is to simply add the new sync method to the prototype of the Model class (because everyone else does), making it available to all Models. I don’t particularly like this solution (even if it’s the easiest), as I often work with Models that are for display only and need not be sync’d.

Pattern #2 would be to create a new object with the method definitions and then Implement it into the model prototype or the model instance itself. Although this is more modular and flexible, it relies on a specially constructed object literal with `mutator` keys like `before`, `after` and so forth.

Pattern #3 would be to use something like a proper Class Mutator, as sketched below. A `Class mutator` is basically a specially crafted constructor object key that can change how the Class is being constructed – existing ones are Extends, Implements and Binds (from mootools-more). You could add something like `Syncs: [Storage, Sync]` – though it’s not easy to write, it adds a footprint to Class, and it does not offer good readability, as there is little to indicate how important such a key really is.
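For illustration only, a hypothetical `Syncs` mutator might look like this (Storage and Sync being made-up mixin classes; this is not what we will build):

// mutators run at class-definition time with the class itself as `this`;
// here we simply delegate to the built-in Implements mutator.
Class.Mutators.Syncs = function(mixins) {
    Class.Mutators.Implements.call(this, mixins);
};

// usage would then be:
// var SyncedModel = new Class({
//     Extends: Model,
//     Syncs: [Storage, Sync]
// });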

Pattern #4 is what we will write instead, because it’s the most MooToolsy ™. We will create a new Class called Model.sync that Extends Model. Easy as pie…

To start off, we need to define a small map of CRUD to normal GET/POST. Keep in mind that the MooTools Request class already exports all that is good so any Request instance supports instance.delete() and so forth. We will use that and copy the pattern into our own class. You can store that in a closure or put it on an existing object for convenience.


// define CRUD mapping.
Model.methodMap = {
    create: 'POST',
    read: 'GET',
    update: 'PUT',
    delete: 'DELETE'
};

That’s that. It will know what to call, and the Model will get the nice methods instead. Now on to creating the Class. If you have never extended a MooTools class, it’s easy – we start with the skeleton:


Model.sync = new Class({
    
    Extends: Model,

    initialize: function(obj, options) {
        // needs to happen first before events are added,
        // in case we have custom accessors in the model object.        
        this.setupSync();
        this.parent(obj, options);
    }
});

It’s fairly self-explanatory – we define a new constructor function (aka Class) that calls the Model one, but when the `initialize` function runs, it also calls our new method `setupSync`.

Reviewing our specification before we write more code, we need to cover the 2 special cases we now have: the model `id` and `urlRoot` properties. We do so by using our custom property accessors, adding them to the new prototype:


Model.sync = new Class({
    
    Extends: Model,

    properties: {
        id: {
            get: function() {
                // always need an id, even if we don't have one.
                return this._attributes.id || (this._attributes.id = String.uniqueID());
            }
        },
        urlRoot: {
            // normal convention - not in the model!
            set: function(value) {
                this.urlRoot = value;
            },
            get: function() {
                // make sure we return a sensible url.
                var base = this.urlRoot || this.options.urlRoot || 'no-urlRoot-set';
                base.charAt(base.length - 1) != '/' && (base += '/');
                return base;
            }
        }
    },

    options: {
        // by default, HTTP emulation is enabled for mootools request class. we want it off.
        emulateREST: false
    },

    initialize: ...

We have now ensured that any model.get('id') will always return an id. This may not be the smartest decision in terms of recovery when no ID is set, but it guarantees the uniqueness of the data model. If you ever want the sync to go to, say, localStorage, sessionStorage or window.name instead, even a fake ID will come in handy in keeping your data apart.

We have also prevented model.set('urlRoot', '/something/') from ever making it into the actual data model. We also define a getter for urlRoot that ensures it has a trailing `/`, because we will likely be appending the model id to the URL when syncing.
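A quick sanity check of both accessors (assuming a hypothetical Model.sync instance called `model`):

model.set('urlRoot', '/account'); // intercepted by the custom setter, never stored
model.get('urlRoot');             // '/account/' - the getter appends the trailing slash
'urlRoot' in model.toJSON();      // false - the data model stays clean
model.get('id');                  // always returns a value, falling back to String.uniqueID()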

Next up. Add the method that creates the Request instance (we call that in the initialize):


setupSync: function() {
    var self = this,
        rid = 0,
        incrementRequestId = function() {
            // request ids are unique and private. private function to up them.
            rid++;
        };

    // public methods - next likely is current rid + 1
    this.getRequestId = function() {
        return rid + 1;
    };

    this.request = new Request.JSON({
        // one request at a time
        link: 'chain',
        url: this.get('urlRoot'),
        emulation: this.options.emulateREST,
        onRequest: incrementRequestId,
        onCancel: function() {
            this.removeEvents('sync:' + rid);
        },
        onSuccess: function(responseObj) {
            self.fireEvent('sync', [responseObj, this.options.method, this.options.data]);
            self.fireEvent('sync:' + rid, [responseObj]);
        },
        onFailure: function() {
            self.fireEvent('sync:error', [this.options.method, this.options.url, this.options.data]);
        }
    });


    // export crud methods to model.
    // export crud methods to model - the map lives on Model.methodMap as defined earlier.
    Object.each(Model.methodMap, function(requestMethod, protoMethod) {
        self[protoMethod] = function(model, options) {
            this.sync(protoMethod, model, options);
        };
    });

    return this;
} 

Essentially, nothing too special – we created our request instance and exposed .create(), .read(), .update() and .delete() as methods on the model itself, each passing its arguments on to .sync().

Although these methods give you low-level access to syncing, you need to rely on scripting and events to do something with the data. You may want to wrap that into an API with something like a .fetch() and .save() pair, though strictly speaking such specificity is not necessary.
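For example, driving it by hand at this level might look like so (assuming `model` is an instance of our new class):

model.addEvent('sync', function(response, method, data) {
    // fires after every successful request, whatever the CRUD method
    console.log(method + ' came back with', response);
});

model.read(); // issues a GET to urlRoot + id + '/' and fires 'sync' on success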

In order to overload these into the sync method but still get method-specific events, we will first create a little helper that adds a one-off event and then removes itself:


_throwAwaySyncEvent: function(eventName, callback) {
    // a pseudo :once event for each sync that sets the model to response and can do more callbacks.

    // normally, methods that implement this will be the only ones to auto sync the model to server version.
    eventName = eventName || 'sync:' + this.getRequestId();

    var self = this,
        throwAway = {};

    throwAway[eventName] = function(responseObj) {
        if (responseObj && typeof responseObj == 'object') {
            self.set(responseObj);
            callback && callback.apply(self, responseObj);
        }

        // remove this one-off event.
        self.removeEvents(throwAway);
    };

    return this.addEvents(throwAway);
}.protect()

Now, the actual fetch.


fetch: function() {
    // perform a .read and then set returned object key/value pairs to model.
    this._throwAwaySyncEvent('sync:' + this.getRequestId(), function() {
        this.fireEvent('fetch');
        this.isNewModel = false;
    });
    this.read();

    return this;
}   

Keep in mind that this example may or may not suit your needs. For starters, you may have a model with the following object stored:

{
    foo: 'bar'
}

And the fetch may return:

{
    bar: 'foo'
}

Your model will just get a new key ‘bar’ with the value ‘foo’ – it won’t suddenly be rewritten wholesale to the server version. Any values that have changed will change on your model and fire an ‘onChange’ event as you would expect. Just keep in mind that you may want to add an options argument to .fetch for finer control, though you’d have to leave a provision for private keys like id, or anything else that should not be changed midway by the server.

It can be difficult to spec out a .save() method, but we are going to give it a go. Basically, we want to be able to save the model as is, or pass a key/value pair or an object to the save method, set it on the model and then save. An alternative spec would be to save to the server without setting on the model first when arguments are passed – but we can already do that by calling model.update({some: 'data'});


save: function(key, value) {
    // saves model or accepts a key/value pair/object, sets to model and then saves.
    var method = ['update','create'][+this.isNew()];

    if (key) {
        // if key is an object, go to overloadSetter.
        var ktype = typeOf(key),
            canSet = ktype == 'object' || (ktype == 'string' && typeof value != 'undefined');

        canSet && this._set.apply(this, arguments);
    }

    // we want to set this.
    this._throwAwaySyncEvent('sync:' + this.getRequestId(), function() {
        this.fireEvent('save');
        this.fireEvent(method);
    });


    // create first time we sync, update after.
    this[method]();
    this.isNewModel = false;

    return this;
}

And the helper method that just returns how new we think the model is:

isNew: function() {
    if (typeof this.isNewModel === 'undefined')
        this.isNewModel = true;

    return this.isNewModel;
}

We can now use this in 1 of 3 ways.

modelInstance.save(); // save the current model, will fire 'create', then 'update'
modelInstance.save('hello', 'there'); // sets `hello: there` into the model, then saves
modelInstance.save({
    'hello': 'again',
    'foo': 'bar'
}); // saves hello: again, foo: bar into the model, then saves

modelInstance.get('hello'); // 'again'
// as opposed to a simple object literal passed straight through:
modelInstance.update({'hi': 'there'});
// this kind of update is partial - it sends only the passed object (a bare
// .update() exports the whole model) and never touches the local model:
modelInstance.get('hi'); // null

So, what does the fabled sync method look like in the end?

sync: function(method, model, options) {
    // internal low level api that works with the model request instance.
    options = options || {};
    
    // determine what to call or do a read by default.
    method = method && Model.methodMap[method] 
        ? Model.methodMap[method] 
        : Model.methodMap['read'];
    
    options.method = method;

    // if it's a method via POST, append passed object or use exported model
    if (method == Model.methodMap.create || method == Model.methodMap.update)
        options.data = model || this.toJSON();

    // make sure we have the right URL
    options.url = this.get('urlRoot') + this.get('id') + '/';

    // pass it all to the request
    this.request.setOptions(options);
    
    // call the request class' corresponding method (mootools does that for free!)
    this.request[method](model);
}

Nothing too fancy is required. We determine what the method is first, based upon our map. We care whether it’s a method that POSTs data; if so, we send the passed object or the exported, serialised model. This allows you to be more flexible and update parts of the model on the fly, eg. Model.update({name: 'dimitar'}) – even if that name differs from the one in your model, it will just be dispatched to the server.

We also compose a URL and try to append a model id to it.

That’s about all you need to get going, really. How would you use it?


// define a new prototype for our model. 
// You can just make an instance of Model.sync directly but this is cleaner
var userModel = new Class({

    Extends: Epitome.Model.Sync,

    options: {
        defaults: {
            urlRoot: '/account/'
        }
    }
});

var user = new userModel({
    id: '25'
}, {
    onChange: function() {
        some.viewRenderer(this.toJSON()); // call toJSON to pass the exported object
    },
    onSync: function() {
        console.log('hi');
    }
});

// read the data periodically
user.fetch.periodical(3000, user);

And thus we have the ability to keep our models in sync with the server – or at the very least, a way of talking to some endpoints and hoping for the best. The pattern and API are not perfect, but they allow you to use a different medium for syncing, like storage – all you really want is to keep the main method names and arguments sane and interchangeable.

What does it all look like in the end? Well, you can look at the source code or play with it on this jsfiddle. We use jsfiddle and not QMetric’s own Chiel’s tinker.io because we need a request echo service. To make it work, we set the model urlRoot to /echo/ and the model id to json, which combines into /echo/json/ and complies with the format jsfiddle requires.

Feel free to fork it, play with it, extend it or whatever. Any issues or questions, either post as a comment here, open an issue on github or catch me on IRC – irc://irc.freenode.net#mootools, nick coda. Next installments: Syncing to local and sessionStorage and then we do a quick ‘how to build your first controller’.

This is a demo of the very useful BusterJS JavaScript testing framework, which runs under node. It supports multiple ways of testing – including static, on-demand and autotesting.

This is a video that highlights how it can listen for changes in your project and run the test suite automatically.

To install buster, do npm install -g buster

To run a buster capture server (where you can attach any device to the capture interface), run buster server and then point your browser to port 1111 on the host that runs it.

Finally, run either buster test or buster autotest to start testing. Couldn’t be easier but it makes such a difference to javascript development.
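Buster finds its tests via a buster.js config file in the project; a minimal browser configuration might look something like this (the paths here are hypothetical):

var config = module.exports;

config['browser tests'] = {
    environment: 'browser',
    sources: ['Source/*.js'],  // your library files
    tests: ['Tests/*-test.js'] // your test files
};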

If you want to play with the code in the video and inspect the actual tests, you can clone our mailcheck plugin from my repo here:
git clone git://github.com/DimitarChristoff/mailcheck.git

Installing RabbitMQ server on a 64-bit Amazon AWS Linux AMI using Puppet was notoriously difficult to achieve, so after finally finding a stable solution I’ve published it over at GitHub, hoping this might save someone some considerable anguish!

https://github.com/aeells/puppet-rabbitmq-ec2-linux

According to the guys at RabbitMQ, the install on the 64-bit EC2 Linux AMI is not a lot of fun, largely because of the Erlang dependency.

The stability of some of the RPM providers / mirrors we were using at one point was also quite questionable. The final solution does still rely on some external providers, namely rabbitmq.com and binaries.erlang-solutions.com, but we’ve never had an issue with their stability and could arguably host the RPMs ourselves anyway.

Just in today: the Chromium dev team announced a huge improvement to WebKit, including a 240% boost to innerHTML performance and much more.

DOM performance boosts summary

Read full story

This is a MooTools JavaScript library tutorial
Difficulty: moderate
Skill: intermediate+
Requires: Prior knowledge of JavaScript, MooTools and MooTools Classes
UPDATE: This tutorial is now hosted in its own GitHub repository called Epitome. You can view the Groc documentation of the class on the gh-pages branch here.

The boom in client-side MVC libraries like Backbone.js, Ember (formerly SproutCore), Knockout.js and so forth has been nothing short of amazing. They provide a great pattern for data-driven rendering and events without any extra logic to cater for display.

I guess it makes sense for users of libraries such as jQuery (or Zepto) to rely on a third-party MVC (or MVVM) framework, but if you’re a MooTools user, the return is somewhat diminished. The nature of event-based programming in MooTools via the DOMEvent and Class.Event APIs has always granted extra flexibility, and was in no small part instrumental in our decision to use MooTools in the first place.

Having said that, the idea of a Class (or Object) that reacts to data model changes is not without appeal. Let’s create some basic specs for a Data Model in a possible MVC pattern:

  • custom setter that changes any property on the data model
  • custom getter that can return a property from the data model
  • access to the data model directly w/o the API
  • ability to set/get multiple properties at the same time
  • ability to fire a single change event for when all properties have been updated
  • ability to fire a change:propertyname event for every property that changes, but not for every set
  • ability to export the whole data model

These are also nice-to-have features that most frameworks support, but they are not essential to a core Model experiment:

  • serialisation of Models (IDs)
  • ability to distinguish between non-primitive values like objects that look the same but are not the same object: should NOT raise a change event
  • sync func: ability to load, save and delete the data model RESTfully (ish).

The best way to keep track of what we are trying to achieve is to write the specs into a Test Case. We are going to use Buster.js, and here is what we have come up with for the Model: tests/epitome-model-test.js. As we go along, we will slowly start making these tests pass. You can check out some of the branches in the repository to see how it shapes up at each stage:

git checkout [branchname], where branchname is one of the following (in this order): skeleton, attributes, overload, events, accessors, is-equal and finally, back to master. Of course, you can just do git fetch;git br -lr; and see what’s cooking…
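For flavour, a single buster spec looks something like this (a made-up example, not one of the actual Epitome tests):

var assert = buster.assert; // buster's bundled assertions

buster.testCase('Epitome model', {
    setUp: function() {
        this.model = new Model({ foo: 'bar' });
    },

    'should return a set property via the getter': function() {
        assert.equals(this.model.get('foo'), 'bar');
    }
});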

So, without further ado, let’s get started with the core of the Class, i.e.: the closure and storage for the data object:


!function() {
    
    var Model = this.Model = new Class({

        Implements: [Options, Events],

        _attributes: {}, // initial `private` object

        options: {
            defaults: {
            }
        },

        initialize: function(obj, options) {
            // constructor for Model class.

            // are there any defaults passed? better to have them on the proto.
            options && options.defaults && (this.options.defaults = Object.merge(this.options.defaults, options.defaults));

            // initial obj should pass on a setter (this will fail for now).
            obj && typeOf(obj) === 'object' && this.set(Object.merge(this.options.defaults, obj));

            // merge options overload
            this.setOptions(options);
        }
    }) // end class
}()

So far, so good – we have our skeleton, and we have spec’d out the need for a simple set method. In order to implement set, we need to think about where to place the actual data model. It could be a truly private object inside the closure, but that would mean a single object shared between all model instances rather than one per instance. So we are going to make it a property on the class instead (MooTools Class re-creates object properties per instance). This is more-or-less standard practice, although it does mean external interference can directly modify model values without firing events. With that in mind… we will have our data model in the this._attributes object.

There is nothing to prevent access to the attributes object by referencing instance._attributes.property directly.

Despite the lack of privacy for our data model, we are going to create a function as the basic setter API method – a simple key => value dispatcher with some naive logic:


set: function(key, value) {
    // needs to be bound the the instance.
    if (!key || typeof value === 'undefined')
        return this;

    // no change? this is crude and works for primitives.
    if (this._attributes[key] && this._attributes[key] === value)
        return this;

    if (value === null) {
        delete this._attributes[key]; // delete = null.
    }
    else {
        this._attributes[key] = value;
    }

    // fire an event.
    this.fireEvent('change:' + key, value);

    // store changed keys... (initialised lazily until the refactor below)
    (this.propertiesChanged = this.propertiesChanged || []).push(key);

    return this;
}

This is fine – it deals with setting a single property. We can already add a test of our class – which you can see on this tinker.io (modified so it runs with what we have so far).

But we also need to look at our specs. The above works for a single property change, but if we need to call it for each property manually, it becomes rather inconvenient. Instead, we will refactor a little: create a new dummy set function, move the function above into _set, and decorate it with .overloadSetter – a MooTools API that overloads a single key -> value pair signature into also accepting a full object. It now starts to look like this:


set: function() {
    // call the real setter. we proxy this because we want
    // a single event after all properties are updated and the ability to work with
    // either a single key, value pair or an object
    this.propertiesChanged = [];
    this._set.apply(this, arguments);
    this.propertiesChanged.length && this.fireEvent('change', [this.propertiesChanged]);
},

// private, real setter functions, not on prototype, see note above
_set: function(key, value) {
    // needs to be bound the the instance.
    if (!key || typeof value === 'undefined')
        return this;

    // custom setter - see bit further down
    if (this.properties[key] && this.properties[key]['set']) {
        return this.properties[key]['set'].call(this, value);
    }
    
    // no change? this is crude and works for primitives.
    if (this._attributes[key] && this._attributes[key] === value)
        return this;

    if (value === null) {
        delete this._attributes[key]; // delete = null.
    }
    else {
        this._attributes[key] = value;
    }

    // fire an event.
    this.fireEvent('change:' + key, value);

    // store changed keys...
    this.propertiesChanged.push(key);

    return this;
}.overloadSetter(), // mootools abstracts overloading to allow object iteration

What we have written so far deals with a lot of our specs, namely: a private setter, an event for every property, an event after all properties, and a change event that fires only when a change actually takes place, not on every set. We have a working event API, and we also have a propertiesChanged array, which lets us pass all the actually-changed properties to the unified event handler. We have also created a pseudo-delete by passing null as the property value, and we have allowed for custom setters – a bit more on that later.
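Exercising what we have so far might look like this (a quick sketch):

var model = new Model({ foo: 'bar' });

model.addEvent('change:foo', function(value) {
    console.log('foo is now', value);
});
model.addEvent('change', function(keys) {
    console.log('changed keys:', keys); // e.g. ['foo']
});

model.set('foo', 'baz'); // fires change:foo and change
model.set('foo', 'baz'); // same value - nothing fires
model.set('foo', null);  // pseudo-delete: removes foo from _attributes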

Now, we should provide access to the data via an API – let’s create our first getter:


get: function(key) {
    // return from attributes or return null when undefined.
    return (key && typeof this._attributes[key] !== 'undefined')
        ? this._attributes[key]
        : null;
}.overloadGetter()

Just like before, we pass it through the MooTools .overloadGetter API, which is the counterpart of overloadSetter.

This allows us to get multiple properties, eg, model.get(['id', 'name', 'surname']) will return a single object with only these properties (if available).

One thing that MooTools tends to support is the ability to define custom `accessors` (getters and setters) that override your defaults. So… we are going to have to revisit what we wrote, as this provides a very nice tool that safeguards against direct use of certain properties. Eg, you may want to internally parse a model.set("date", "20-12-77") into a more meaningful value – or when you call model.get("dateObject"), you may want an actual Date object back.

This is a MooToolish practice, similar in API to the custom element accessors – eg, the accessors for Element.get("tween") and Element.set("tween") look like this: https://github.com/mootools/mootools-core/blob/master/Source/Fx/Fx.Tween.js#L45-61. We do the same thing in our Model:


// we add this object of overrides to our model class first.
properties: {},

// now, change the getter and add support for that
get: function(key) {
    // custom accessors take precedence and don't rely on the key being in attributes
    if (key && this.properties[key] && this.properties[key]['get']) {
        return this.properties[key]['get'].call(this);
    }

    // else, return from attributes or null when undefined.
    return (key && typeof this._attributes[key] !== 'undefined')
        ? this._attributes[key]
        : null;
}.overloadGetter()

What we do here is call any functions bound in the model.properties.property object and let them deal with how the data is set or retrieved. You can define a get override, a set override, or both. In practice, it could look like this when you want to modify an existing Model instance:


var foo = new Model({
    date: "2001-11-30"
});

foo.properties.date = {
    get: function() {
        return new Date(this._attributes['date']);
    }
};

foo.get("date"); // -> Date object, not string. 

However, the above practice is not very semantic – you are changing the behaviour of a Model instance, not your Model prototype. In reality, you want to abstract your custom Models by extending your base Model class.

In any case, things are starting to shape up. Here is what our Model Class is starting to look like: https://tinker.io/e2f30.

We are now going to create a custom version of a Model that can deal with Users, with a special accessor for, say, fullName:


// create a new user model by extending Model
var User = new Class({
    
    Extends: Model,

    // define a custom accessor
    properties: {
        fullName: {
            get: function() {
                return Object.values(this.get(["name", "surname"])).join(" ");
            },
            set: function(value) {
                var parts = value.split(" ");
                // notice we call _set directly or the change event won't fire correctly
                this._set({
                    "name": parts[0],
                    "surname": parts[1]
                });
            }
        }
    }
});
                
// instantiate the new User Model                
var myUser = new User({
    name: "Bob",
    surname: "Robertson"
});   
                
console.log(myUser.get("fullName")); // Bob Robertson      
myUser.set("fullName", "Bob Awesome");
console.log(myUser.get("surname")); // awesome        

This way, the actual data Model in _attributes will never hold a fullName value, but it works just as if it did. See the magic happen on this updated tinker.

Our core Model is nearly complete. There is a pseudo ‘standard’ in MVC frameworks out there to add a toJSON method that returns the current data model. Please note that despite what the name suggests, toJSON returns an object and not a JSON string. This may seem a redundant gesture, as the instance has direct access to the ._attributes object that contains all our data. However, by providing an API you can extend it later and filter what actually gets exported. Last but not least, ._attributes is an Object, and we need to dereference it from the Model when we export it; otherwise changes by reference could affect our Model data without our knowledge.


toJSON: function() {
    return Object.clone(this._attributes);
}
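Because the export goes through an API, a subclass can later filter what leaves the model – for example (a hypothetical extension, not part of the tutorial’s Model):

var PublicModel = new Class({

    Extends: Model,

    toJSON: function() {
        var data = this.parent();
        delete data.password; // keep private keys out of the export
        return data;
    }
});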

View the complete Model class here: https://tinker.io/beceb

It is important to know that the Model won’t fire a change event for every set. Eg.:

myUser.set("dateOfBirth", "31/07/1975"); // fires change:dateOfBirth and change: fn(['dateOfBirth'])
myUser.set("dateOfBirth", "31/07/1975"); // does NOT fire anything.

// because we just do a simple compare, and comparing 2 objects of the same structure returns false, this will always fire:
myUser.set("dateOfBirth", new Date(1975, 6, 31)); // fires change:dateOfBirth and change: fn(['dateOfBirth'])
myUser.set("dateOfBirth", new Date(1975, 6, 31)); // fires change:dateOfBirth again even though they are the same.

We will look at fixing the change events so they don’t fire for similar looking object types in our next installment.

This concludes part 1 of the Model tutorial. In part 2, we will try to make it more versatile and have a go at sync funcs. Part 3 will cover how to create a collection of Models and part 4 will deal with the View (rendering).