Planet JavaScript

Updated Tuesday, 06 January 2015 17:01
ECMAScript 6: maps and sets

Among others, the following four data structures are new in ECMAScript 6: Map, WeakMap, Set and WeakSet. This blog post explains how they work.



JavaScript has always had a very spartan standard library. Sorely missing was a data structure for mapping values to values. The best you can get in ECMAScript 5 is a map from strings to arbitrary values, by abusing objects. Even then there are several pitfalls that can trip you up.

The Map data structure in ECMAScript 6 lets you use arbitrary values as keys and is highly welcome.

Basic operations

Working with single entries:

    > let map = new Map();
    > map.set('foo', 123);
    > map.get('foo')
    123
    > map.has('foo')
    true
    > map.delete('foo')
    true
    > map.has('foo')
    false

Determining the size of a map and clearing it:

    > let map = new Map();
    > map.set('foo', true);
    > map.set('bar', false);
    > map.size
    2
    > map.clear();
    > map.size
    0

Setting up a map

You can set up a map via an iterable over key-value “pairs” (arrays with 2 elements). One possibility is to use an array (which is iterable):

    let map = new Map([
        [ 1, 'one' ],
        [ 2, 'two' ],
        [ 3, 'three' ], // trailing comma is ignored
    ]);

Alternatively, the set method is chainable:

    let map = new Map()
    .set(1, 'one')
    .set(2, 'two')
    .set(3, 'three');


Any value can be a key, even an object:

    let map = new Map();
    const KEY1 = {};
    map.set(KEY1, 'hello');
    console.log(map.get(KEY1)); // hello
    const KEY2 = {};
    map.set(KEY2, 'world');
    console.log(map.get(KEY2)); // world

What keys are considered equal?

Most map operations need to check whether a value is equal to one of the keys. They do so via the internal operation SameValueZero, which works like === [1], but considers NaN to be equal to itself.

Let’s first see how === handles NaN:

    > NaN === NaN
    false

Conversely, you can use NaN as a key in maps, just like any other value:

    > let map = new Map();
    > map.set(NaN, 123);
    > map.get(NaN)
    123

Like ===, -0 and +0 are considered the same value (which is the best way to handle the two zeros [3]).

    > map.set(-0, 123);
    > map.get(+0)
    123

Different objects are always considered different. That is something that can’t be configured (yet), as explained later, in the FAQ.

    > new Map().set({}, 1).set({}, 2).size
    2

Getting an unknown key produces undefined:

    > new Map().get('asfddfsasadf')
    undefined


Iterating over maps

Let’s set up a map to demonstrate how one can iterate over it.

    let map = new Map([
        [false, 'no'],
        [true,  'yes'],
    ]);

Maps record the order in which elements are inserted and honor that order when iterating over keys, values or entries.

Iterables for keys and values

keys() returns an iterable [4] over the keys in the map:

    for (let key of map.keys()) {
        console.log(key);
    }
    // Output:
    // false
    // true

values() returns an iterable over the values in the map:

    for (let value of map.values()) {
        console.log(value);
    }
    // Output:
    // no
    // yes

Iterables for entries

entries() returns the entries of the map as an iterable over [key,value] pairs (arrays).

    for (let entry of map.entries()) {
        console.log(entry[0], entry[1]);
    }
    // Output:
    // false no
    // true yes

Destructuring enables you to access the keys and values directly:

    for (let [key, value] of map.entries()) {
        console.log(key, value);
    }

The default way of iterating over a map is entries():

    > map[Symbol.iterator] === map.entries
    true

Thus, you can make the previous code snippet even shorter:

    for (let [key, value] of map) {
        console.log(key, value);
    }

Spreading iterables

The spread operator (...) turns an iterable into the arguments of a function or method call. For example, Math.max() accepts a variable number of arguments. With the spread operator, you can apply that method to iterables.

    > let arr = [2, 11, -1];
    > Math.max(...arr)
    11

Spread also turns an iterable into the elements of an array. That lets us convert the result of Map.prototype.keys() (an iterable) into an array:

    let map = new Map([
        [1, 'one'],
        [2, 'two'],
        [3, 'three'],
    ]);
    let arr = [...map.keys()]; // [1, 2, 3]

Looping over entries

The Map method forEach has the following signature:

    Map.prototype.forEach((value, key, map) => void, thisArg?) : void

The signature of the first parameter mirrors the signature of the callback of Array.prototype.forEach, which is why the value comes first.

    let map = new Map([
        [false, 'no'],
        [true,  'yes'],
    ]);
    map.forEach((value, key) => {
        console.log(key, value);
    });
    // Output:
    // false no
    // true yes

Mapping and filtering

You can map() and filter() arrays, but there are no such operations for maps. The solution:

  1. Convert the map into an array of [key,value] pairs.
  2. Map or filter the array.
  3. Convert the result back to a map.

That’s what happens in the following example:

    let map0 = new Map()
    .set(1, 'a')
    .set(2, 'b')
    .set(3, 'c');
    let map1 = new Map(
        [...map0] // step 1
        .filter(([k, v]) => k < 3) // step 2
    ); // step 3
    // Resulting map: {1 => 'a', 2 => 'b'}
    let map2 = new Map(
        [...map0] // step 1
        .map(([k, v]) => [k * 2, '_' + v]) // step 2
    ); // step 3
    // Resulting map: {2 => '_a', 4 => '_b', 6 => '_c'}

Step 1 is performed by the spread operator (...) which I have explained previously.


Map API

Handling single entries:

  • Map.prototype.get(key) : any
    Returns the value that key is mapped to in this map. If there is no key key in this map, undefined is returned.

  • Map.prototype.set(key, value) : this
    Maps the given key to the given value. If there is already an entry whose key is key, it is updated. Otherwise, a new entry is created.

  • Map.prototype.has(key) : boolean
    Returns whether the given key exists in this map.

  • Map.prototype.delete(key) : boolean
    If there is an entry whose key is key, it is removed and true is returned. Otherwise, nothing happens and false is returned.

Handling all entries:

  • get Map.prototype.size : number
    Returns how many entries there are in this map.

  • Map.prototype.clear() : void
    Removes all entries from this map.

Iterating and looping (both happen in the order in which entries were added to the map):

  • Map.prototype.entries() : Iterable<[any,any]>
    Returns an iterable with one [key,value] pair for each entry in this map. The pairs are arrays of length 2.

  • Map.prototype.forEach((value, key, collection) => void, thisArg?) : void
    The first parameter is a callback that is invoked once for each entry in this map. If thisArg is provided, this is set to it for each invocation. Otherwise, this is set to undefined.

  • Map.prototype.keys() : Iterable<any>
    Returns an iterable over all keys in this map.

  • Map.prototype.values() : Iterable<any>
    Returns an iterable over all values in this map.

  • Map.prototype[Symbol.iterator]() : Iterable<[any,any]>
    The default way of iterating over maps. Refers to Map.prototype.entries.
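As a sketch of the thisArg parameter described above (the map contents and the logger object are made up for illustration):

```javascript
let map = new Map().set('a', 1).set('b', 2);

let logger = {
  prefix: '>> ',
  log(key, value) {
    console.log(this.prefix + key + ' = ' + value);
  }
};

// Use an ordinary function (not an arrow function),
// so that forEach can bind `this` to the thisArg.
map.forEach(function (value, key) {
  this.log(key, value);
}, logger);
// Output:
// >> a = 1
// >> b = 2
```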


WeakMap

A WeakMap is a map that doesn’t prevent its keys from being garbage-collected. That means that you can associate data with objects without having to worry about memory leaks.

A WeakMap is a data structure whose keys must be objects and whose values can be arbitrary values. It has the same API as Map, with one significant difference: you can’t iterate over the contents – neither the keys, nor the values, nor the entries. You can’t clear a WeakMap, either.

The rationales for these restrictions are:

  • The volatility of WeakMaps makes iteration difficult.

  • Not having clear() provides a security property. Quoting Mark Miller: “The mapping from weakmap/key pair to value can only be observed or affected by someone who has both the weakmap and the key. With clear(), someone with only the WeakMap would’ve been able to affect the WeakMap-and-key-to-value mapping.”

Using WeakMaps for private data

The following code uses the WeakMaps _counter and _action to store private data.

    let _counter = new WeakMap();
    let _action = new WeakMap();
    class Countdown {
        constructor(counter, action) {
            _counter.set(this, counter);
            _action.set(this, action);
        }
        dec() {
            let counter = _counter.get(this);
            if (counter < 1) return;
            counter--;
            _counter.set(this, counter);
            if (counter === 0) {
                _action.get(this)();
            }
        }
    }

Let’s use Countdown:

    > let c = new Countdown(2, () => console.log('DONE'));
    > c.dec();
    > c.dec();
    DONE

Because Countdown keeps instance-specific data elsewhere, its instance c has no own property keys:

    > Reflect.ownKeys(c)
    []

WeakMap API

WeakMaps have only four methods, all of which work the same as the Map methods.

  • WeakMap.prototype.get(key) : any
  • WeakMap.prototype.set(key, value) : this
  • WeakMap.prototype.has(key) : boolean
  • WeakMap.prototype.delete(key) : boolean


Set

ECMAScript 5 doesn’t have a set data structure, either. There are two possible work-arounds:

  • Use the keys of an object to store the elements of a set of strings.
  • Store (arbitrary) set elements in an array: Check whether it contains an element via indexOf(), remove elements via filter(), etc. This is not a very fast solution, but it’s easy to implement. One issue to be aware of is that indexOf() can’t find the value NaN.

ECMAScript 6 has the data structure Set which works for arbitrary values, is fast and handles NaN correctly.
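The difference is easy to check: indexOf() uses strict equality, which never matches NaN, while Set uses SameValueZero.

```javascript
// Array#indexOf compares with ===, so it can never locate NaN:
let arr = [NaN];
console.log(arr.indexOf(NaN)); // -1

// Set uses SameValueZero, which treats NaN as equal to itself:
let set = new Set(arr);
console.log(set.has(NaN)); // true
```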

Basic operations

Managing single elements:

    > let set = new Set();
    > set.add('red')
    > set.has('red')
    true
    > set.delete('red')
    true
    > set.has('red')
    false

Determining the size of a set and clearing it:

    > let set = new Set();
    > set.add('red')
    > set.add('green')
    > set.size
    2
    > set.clear();
    > set.size
    0

Setting up a set

You can set up a set via an iterable over the elements that make up the set. For example, via an array:

    let set = new Set(['red', 'green', 'blue']);

Alternatively, the add method is chainable:

    let set = new Set().add('red').add('green').add('blue');


Comparing elements

As with maps, elements are compared via SameValueZero: similarly to ===, except that NaN is treated like any other value.

    > let set = new Set([NaN]);
    > set.size
    1
    > set.has(NaN)
    true

Adding an element a second time has no effect:

    > let set = new Set();
    > set.add('foo');
    > set.size
    1
    > set.add('foo');
    > set.size
    1

Similarly to ===, two different objects are never considered equal (which can’t currently be customized, as explained in the FAQ, later):

    > let set = new Set();
    > set.add({});
    > set.size
    1
    > set.add({});
    > set.size
    2


Iterating

Sets are iterable and the for-of loop works as you’d expect:

    let set = new Set(['red', 'green', 'blue']);
    for (let x of set) {
        console.log(x);
    }
    // Output:
    // red
    // green
    // blue

As you can see, sets preserve iteration order. That is, elements are always iterated over in the order in which they were inserted.

The previously explained spread operator (...) works with iterables and thus lets you convert a set to an array:

    let set = new Set(['red', 'green', 'blue']);
    let arr = [...set]; // ['red', 'green', 'blue']

We now have a concise way to convert an array to a set and back, which has the effect of eliminating duplicates from the array:

    let arr = [3, 5, 2, 2, 5, 5];
    let unique = [...new Set(arr)]; // [3, 5, 2]

Mapping and filtering

In contrast to arrays, sets don’t have the methods map() and filter(). A work-around is to convert them to arrays and back.

Mapping:
    let set = new Set([1, 2, 3]);
    set = new Set([...set].map(x => x * 2));
    // Resulting set: {2, 4, 6}

Filtering:
    let set = new Set([1, 2, 3, 4, 5]);
    set = new Set([...set].filter(x => (x % 2) == 0));
    // Resulting set: {2, 4}


Set API

Single set elements:

  • Set.prototype.add(value) : this
    Adds value to this set.

  • Set.prototype.has(value) : boolean
    Checks whether value is in this set.

  • Set.prototype.delete(value) : boolean
    Removes value from this set.

All set elements:

  • get Set.prototype.size : number
    Returns how many elements there are in this set.

  • Set.prototype.clear() : void
    Removes all elements from this set.

Iterating and looping:

  • Set.prototype.values() : Iterable<any>
    Returns an iterable over all elements of this set.

  • Set.prototype[Symbol.iterator]() : Iterable<any>
    The default way of iterating over sets. Points to Set.prototype.values.

  • Set.prototype.forEach((value, key, collection) => void, thisArg?)
    Loops over the elements of this set and invokes the callback (first parameter) for each one. value and key are both set to the element, so that this method works similarly to Map.prototype.forEach. If thisArg is provided, this is set to it for each call. Otherwise, this is set to undefined.

Symmetry with Map: The following two methods only exist so that the interface of sets is similar to the interface of maps. Each set element is handled as if it were a map entry whose key and value are the element.

  • Set.prototype.entries() : Iterable<[any,any]>
  • Set.prototype.keys() : Iterable<any>
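A quick sketch of that symmetry: entries() yields [element, element] pairs, and keys() is literally the same function as values().

```javascript
let set = new Set(['a', 'b']);

// Each entry is a [key, value] pair where key === value:
console.log([...set.entries()]); // [ [ 'a', 'a' ], [ 'b', 'b' ] ]

// keys() and values() are the same function object:
console.log(set.keys === set.values); // true
```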


WeakSet

A WeakSet is a set that doesn’t prevent its elements from being garbage-collected. Consult the section on WeakMap for an explanation of why WeakSets don’t allow iteration, looping and clearing.

Given that you can’t iterate over their elements, there are not that many use cases for WeakSets. They enable you to mark objects, to associate them with boolean values.
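A sketch of that marking use case (the function and variable names are invented for illustration): a WeakSet can record which objects have already been processed without keeping them alive.

```javascript
// Track which objects were already processed,
// without preventing them from being garbage-collected.
const processed = new WeakSet();

function process(obj) {
  if (processed.has(obj)) {
    return; // already handled, skip
  }
  // ... the actual work would happen here ...
  processed.add(obj); // mark the object
}

let task = {};
process(task);
console.log(processed.has(task)); // true
```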


WeakSet API

WeakSets have only three methods, all of which work the same as the Set methods.

  • WeakSet.prototype.add(value)
  • WeakSet.prototype.has(value)
  • WeakSet.prototype.delete(value)


FAQ: maps and sets

Why size and not length?

Question: Arrays have the property length to count the number of entries. Why do maps and sets have a different property, size, for this purpose?

Answer: length is for sequences, data structures that are indexable – like arrays. size is for collections that are primarily unordered – like maps and sets.

Why can’t I configure how maps and sets compare keys and values?

Question: It would be nice if there were a way to configure what map keys and what set elements are considered equal. Why isn’t there?

Answer: That feature has been postponed, as it is difficult to implement properly and efficiently. One option is to hand callbacks to collections that specify equality.

Another option, available in Java, is to specify equality via a method that objects implement (equals() in Java). However, this approach is problematic for mutable objects: in general, if an object changes, its “location” inside a collection has to change as well. But that’s not what happens in Java. JavaScript will probably go the safer route of only enabling comparison by value for special immutable objects (so-called value objects). Comparison by value means that two values are considered equal if their contents are equal. Primitive values are compared by value in JavaScript.


  1. “Equality Operators: === Versus ==” in “Speaking JavaScript”
  2. “NaN” in “Speaking JavaScript”
  3. “Two Zeros” in “Speaking JavaScript”
  4. Iterators and generators in ECMAScript 6
Introduction to HTML Imports

Template, Shadow DOM, and Custom Elements enable you to build UI components easier than before. But it's not efficient to load each resources such as HTML, CSS and JavaScript separately.

Deduping dependencies isn't easy either. To load a library like jQuery UI or Bootstrap today requires using separate tags for JavaScript, CSS, and Web Fonts. Things get even more complex if you deal with Web Components with multiple dependencies.

HTML Imports allow you to load those resources as an aggregated HTML file.

Using HTML Imports

In order to load an HTML file, add a link tag with an import in the rel attribute and an href that contains a path to the HTML file. For example, if you want to load an HTML file called component.html into index.html:


<link rel="import" href="component.html" >

You can load any resource, including scripts, stylesheets, and web fonts, into the imported HTML, just as you would in a regular HTML file:


<link rel="stylesheet" href="css/style.css">
<script src="js/script.js"></script>

doctype, html, head, body aren't required. HTML Imports will immediately load the imported document, resolve subresources and execute JavaScript, if any.

Execution order

Browsers parse the content of HTML in linear order. This means script tags at the top of HTML will be executed earlier than the ones at the bottom. Also, note that browsers usually wait for any JavaScript code to finish executing before parsing the following lines of HTML.

To prevent a script tag from blocking the rendering of HTML, you can use the async / defer attributes (or move all of your script tags to the bottom of the page). The defer attribute postpones execution of the script until the entire HTML is parsed. The async attribute lets the browser execute the script asynchronously, so it won't block rendering.

Then, how do HTML Imports work?

Scripts inside an HTML Import behave just like a script tag with a defer attribute. In the example code below, index.html will execute script1.js and script2.js inside component.html before executing script3.js.


<!-- index.html -->
<link rel="import" href="component.html"> <!-- 1. -->
<title>Import Example</title>
<script src="script3.js"></script>        <!-- 4. -->

<!-- component.html -->
<script src="js/script1.js"></script>     <!-- 2. -->
<script src="js/script2.js"></script>     <!-- 3. -->

  1. Load component.html from index.html and wait for its scripts to execute
  2. Execute script1.js in component.html
  3. Execute script2.js in component.html, after script1.js
  4. Execute script3.js in index.html, after script2.js

Note that adding an async attribute to link[rel="import"] makes the HTML Import behave just like a script tag with an async attribute: the browser won't wait for the imported HTML to load and execute, which also means rendering of the original HTML isn't blocked. This can potentially improve the performance of your website, unless other scripts depend on the execution of the imported HTML.

Going beyond origins

HTML Imports basically can't import resources from other origins. For example, you can't import an HTML file hosted on a different domain.

To avoid this restriction, use CORS (Cross Origin Resource Sharing). To learn about CORS, read this article.

The window and document objects in an imported HTML file

Earlier, I mentioned JavaScript will be executed when an HTML file is imported. But this doesn't mean the markup in the imported HTML file will also be rendered inside the browser. You need to write some JavaScript to help here.

One caveat to using JavaScript with HTML Imports is that the document object in an imported HTML file actually points to the one in the original page.

Taking the previous code as an example, document in both index.html and component.html refers to the document object of index.html.

So, how can you refer to the document object of the imported HTML file?

In order to obtain component.html's document object from within the index.html page, refer to the link element's import property.


var link = document.querySelector('link[rel="import"]');
link.addEventListener('load', function(e) {
  var importedDoc = link.import;
  // importedDoc points to the document under component.html
});

To obtain the document object from within component.html itself, refer to document.currentScript.ownerDocument.


var mainDoc = document.currentScript.ownerDocument;
// mainDoc points to the document under component.html

If you are using webcomponents.js, use document._currentScript instead of document.currentScript. The underscore is used to polyfill the currentScript property which is not available in all browsers.


var mainDoc = document._currentScript.ownerDocument;
// mainDoc points to the document under component.html

By writing the following code at the beginning of your script, you can easily access component.html's document object regardless of whether the browser supports HTML Imports:

document._currentScript = document._currentScript || document.currentScript;

Performance consideration

One of the benefits of using HTML Imports is the ability to organize resources. But it also means more overhead when loading them, because each import is an additional HTML file. There are a couple of points to consider:

Resolving dependencies

What if multiple imported documents all depend on, and try to load the same library? For example:

Say you are loading jQuery in two imported HTML files. If each import contains a script tag to load jQuery, it will be loaded and executed twice.


<!-- index.html -->
<link rel="import" href="component1.html">
<link rel="import" href="component2.html">

<!-- component1.html -->
<script src="js/jquery.js"></script>

<!-- component2.html -->
<script src="js/jquery.js"></script>

This is a problem imports solve for free.

Unlike script tags, HTML Imports skip loading and executing HTML files that have previously been loaded. Taking the previous code as an example: by wrapping the script tag that loads jQuery in an HTML Import, jQuery will be loaded and executed only once.

Dependency resolution

But here's another problem: we have added one more file to load. What can we do about this growing number of files?

Luckily, we have a tool called "vulcanize" for the solution.

Aggregating network requests

Vulcanize is a tool to aggregate multiple HTML files into one, in order to reduce the number of network connections. You can install it via npm, and use it from the command line. There are grunt and gulp tasks as well so you can make vulcanize part of your build process.

To resolve dependencies and aggregate files in index.html:

$ vulcanize -o vulcanized.html index.html

By executing this command, the dependencies in index.html will be resolved and an aggregated HTML file called vulcanized.html will be generated.

Learn more about vulcanize here.

Note: HTTP/2's server push abilities are expected to eliminate the need for concatenating and vulcanizing files in the future.

Combining HTML Imports with Template, Shadow DOM and Custom Elements

Let's utilize HTML Imports with the code we've been working through this article series.

In case you haven't read the previous articles: With templates, defining the content of your custom element can be declarative. With Shadow DOM, styles, IDs and classes of an element can be scoped to itself. With Custom Elements, you can define your own custom HTML tags.

By combining these with HTML Imports, your custom web component will gain modularity and reusability. Anyone will be able to use it just by adding a link tag.


<!-- x-component.html -->
<template id="template">
  <div id="container">
    <img class="webcomponents" src="">
    <content select="h1"></content>
  </div>
</template>
<script>
  // This element will be registered to index.html
  // Because `document` here means the one in index.html
  var XComponent = document.registerElement('x-component', {
    prototype: Object.create(HTMLElement.prototype, {
      createdCallback: {
        value: function() {
          var root = this.createShadowRoot();
          var template = document.querySelector('#template');
          var clone = document.importNode(template.content, true);
          root.appendChild(clone);
        }
      }
    })
  });
</script>

<!-- index.html -->
<link rel="import" href="x-component.html">
<x-component>
  <h1>This is Custom Element</h1>
</x-component>

Notice that because the document object in x-component.html is the same one in index.html, you don't have to write anything tricky. It registers itself for you.

Supported browsers

HTML Imports are supported by Chrome and Opera. Firefox supports it behind a flag as of December 2014 (Update: Mozilla has said they are not currently planning to ship Imports, citing the need to first see how ES6 modules play out).

To check availability, consult the browser support status pages. For polyfilling other browsers, you can use webcomponents.js (renamed from platform.js).


So that's HTML Imports. If you are interested in learning more, head over to the resources at webcomponents.org.

Databound, Typist

If you use Ruby on Rails, then you might like this Rails REST library wrapped: Databound (GitHub: Nedomas/databound, License: MIT, npm: databound, Bower: databound) by Domas Bitvinskas. The API looks a bit like the Rails syntax for database models:

User = new Databound('/users');

User.where({ name: 'John' }).then(function(users) {
  alert('Users called John');
});

User.find(15).then(function(user) {
  print('User no. 15: ' +;
});

User.create({ name: 'Peter' }).then(function(user) {
  print('I am ' + + ' from database');
});

Install it with npm, Bower, or as part of a Rails asset pipeline. The author also notes that you can use it with Angular as an alternative to ngResource.


Typist (GitHub: positionly/Typist, License: MIT, Bower: Typist) by Oskar Krawczyk is a small library for animating text as if it’s being typed. It can work with responsive layouts, and the author claims it has improved click-through-rates on a commercial homepage.

It doesn’t have any dependencies, and is invoked by a constructor that accepts options for the animation intervals. The required markup should specify the text to be typed in the data-typist and data-typist-suffix attributes.

Alex Young (DailyJS) @ London › England
Friday, 02 January 2015
Curl Converter, aja.js, sneakpeek
Curl Converter

Chrome’s “Copy as cURL” menu item is useful, but what if you want to duplicate the same request with Node? Curl Converter (GitHub: NickCarneiro/curlconverter, License: MIT, npm: curlconverter) by Nick Carneiro can convert between cURL syntax and Node’s popular request module. It also supports Python, and can be installed with npm.


aja.js (Bower: aja, npm: aja, License: MIT) by Bertrand Chevrier is an Ajax library that supports JSON and JSONP. It can be used to load large chunks of HTML or JSON, and can be installed with npm or Bower.

The API is fluent, so it can be used as a REST client like this:

  aja()
    .method('post')
    .url('/api/users') // illustrative endpoint
    .data({ firstname: 'John Romuald' })
    .on('200', function(response) {})
    .go();

It also supports some DOM manipulation, such as loading an HTML fragment directly into a selected element.
It comes with tests that can be run with Grunt, and the readme has more examples for things like posting data.


If you’re looking for a library to hide the header when the page is scrolled, then sneakpeek (GitHub: antris/sneakpeek, License: MIT, npm: sneakpeek) is nice because it’s small, installable with npm, and has no external dependencies.

It’s a bit like headroom.js, but easier to use with Browserify.

Alex Young (DailyJS) @ London › England
Thursday, 01 January 2015
Node Roundup: nchat, hulken, cult


nchat (GitHub: irrationalistic/nchat, npm: nchat) by Chris Rolfs is a terminal-based chat application that uses WebSocket, which means it’s easier to use on networks where IRC might be blocked.

Notifications are supported on Mac OS X, and the client can run as the server so you only need to install nchat itself. It supports a few IRC-style commands, like /users, and you can deploy it to hosting providers like Heroku.


Hulken (GitHub: hulken, License: MIT, npm: hulken) by Johan Hellgren is a stress testing tool for HTTP services. It can make GET and POST requests, and can be configured to send dynamic payloads.

You can use hulken as a command-line tool, or a Node module. The documentation includes all of the supported options, and you’ll need to write an options.json file to use it on the command-line.


Cult (GitHub: typicode/cult, License: MIT, npm: cult) is a tool that monitors changes to a gulpfile and then reloads Gulp. You can run it on the command-line, and it uses the chalk library for pretty output. The readme has an example for supporting gulpfiles that are split across multiple files.

A new Microsoft browser?

Recently the news broke that Microsoft may be working on another browser instead of IE. After reviewing the available evidence I’ve come to the conclusion that, although Microsoft is making a few adjustments, and a name change for IE might be a good idea, the new browser will essentially be IE12. Still, I think we web developers should support the “new browser” narrative.

It seems the decision was taken to fork Trident, Microsoft’s rendering engine. One version will essentially be IE11 with all backward-compatible bells and whistles, while the other one will be IE12, although it may carry a different name and will sport a new interface and support extensions. (IE extensions, that is. Not Chrome or Firefox extensions.)

The idea seems to be that Windows 10 will ship both these browsers. The Internet icon on the desktop will start up IE12, while “if a page calls for IE to render in a compatibility mode” IE11 will be started up. I am assuming that what’s meant here is the meta versioning switch.

Remember that to this day IE11 also contains IE 10, 9, 8, 7, and 5.5, which are accessible through the once-maligned but now mostly-forgotten meta versioning switch, as well as, in the case of 5.5, the good old doctype switch.

The plan seems to be that the new IE12 will not carry all that cruft, but be a forward-looking modern browser. If you need legacy stuff you must start up another browser. Actually this is not such a bad idea. The versioning switch never really caught on on the public Internet (although corporate Intranets may be a different story), so why weigh IE down with a lot of other rendering engines that hardly anyone outside a corporate environment will ever need?

An implication of forking IE is that the new IE11 would be maintained separately from IE12. That might be interesting, although it’s also a lot of hassle for Microsoft. We’ll have to see if they’re really going to maintain two browsers.

Finally, IE may be changing names in the near future. Actually, that’s a pretty good idea. The brand “IE” has become synonymous with slow, old-fashioned, non-standard-compliant browsing — even though from IE10 on there was little reason for that judgement. But IE is being weighed down by the IE6 legacy, and a new name may be just what it needs. So let’s do it. (But not “Spartan,” please. It doesn’t make sense for a browser. Why not an explorer from the good old days? Maybe even a Dutch one?)

Internally, when talking to other web devs, you should treat the next Microsoft browser as IE12. Externally, however, when talking to clients and other non-techies, it could make sense to support the “Microsoft is creating a new browser” narrative. Who knows, your clients or other contacts may decide it’s time to say goodbye to their old IE versions and embrace the new browser. That would help them, us, and Microsoft at the same time.

JavaScript: 2014 in Review

I can’t keep up with the libraries that people send to DailyJS – there are just too many! I’ve always felt like this is a good thing: it’s a sign the community is creative and works hard to solve problems in interesting new ways.

It’s hard to decide on a framework or library for a particular task: should you use Koa, Express, or Hapi on the server? Gulp or Grunt as a build system? Then there’s client-side development, with its rich set of libraries. This year alone I’ve used React, Angular, Knockout, and Backbone.

One of the reasons there are so many Node modules is npm is so good. There’s still room for improvement, and the npm blog has been useful for tracking new and upcoming changes to the package manager. It seems like more people than ever are using npm for client-side development as well, so it’ll be interesting to see if Bower still occupies its niche in 2015.

Speaking of 2015, I expect to see more people using ES6 features. We’ve already seen several libraries that use generators to make synchronous-style APIs for client-side modules, and server-side databases. Generators seem hard to learn so it’ll take a while for these APIs to catch on.

There’s still scepticism and even irritation in the Node community about ES6 modules. We’ve spent years writing CommonJS modules and happen to like the syntax, so ES6 modules are a hard pill to swallow. There’s a gist from 2013 about Node and ES6 modules that has comments from well-known Node programmers, and since then es6-module-loader by Guy Bedford has appeared. This library is a polyfill that provides System.import for loading ES6 modules. Guy wrote a great article, Practical Workflows for ES6 Modules with lots of details on ES6 modules from a Node programmer’s perspective.

I don’t think 2015 will see a big Node/ES6 module controversy, though. It seems like CommonJS modules are here to stay, and perhaps eventually we’ll start using both formats.

Another potential controversy is the future of Node forks. io.js got a lot of initial attention, but it seems to have cooled off over the last fortnight. But I think forks are positive and I’m excited to see what people do with alternative takes on Node.

If you do anything in 2015, please make more libraries and frameworks. We don’t want a totalitarian open source community, we want a big wonderful mess, because open source is an ongoing conversation with no truly right solutions.

Alex Young (DailyJS) @ London › England ( Feed )
Tuesday, 30 December 2014
React-Grid-Layout, Angular Debug Bar and Reading Position

Samuel Reed sent in React-Grid-Layout (GitHub: strml/react-grid-layout, License: MIT), a grid system that is responsive. It requires React but doesn’t require any other library (including jQuery).

You can use the ReactGridLayout custom element in templates which allows you to cleanly specify how many rows and columns you’d like. It also supports props for columns, rows, responsive breakpoints, and layout change events.

Although the author states it has fewer features than Packery or Gridster, it supports some cool stuff like vertical auto-packing and dragging and resizing.

Angular Debug Bar and Reading Position Indicator

Maciej Rzepiński sent in two useful Angular projects:

angular-debug-bar lets you include a new element, angular-debug-bar, to show some statistics about the current page. This includes a count of $watch and $listener items, DOM objects, and page load time. Each metric is defined with a registerPlugin method, so you might be able to add new metrics, although I haven’t tried that myself.

angular-rpi is based on the Reading Position Indicator post from CSS-Tricks. It shows a bar at the top of the page as you scroll the document:


You can use it with the rpi directive. Both projects have a demo that you can run locally. If you want to edit the progress bar styles, then you can use the .scss file and run npm install ; bower install ; gulp.

Alex Young (DailyJS) @ London › England ( Feed )
Monday, 29 December 2014
ngKookies, preCode.js

ngKookies (GitHub: voronianski/ngKookies, License: MIT, npm: ngkookies) by Dmitri Voronianski is a replacement for the Angular $cookieStore provider. It’s a port of jquery-cookie that helps work around angular.js issue 950.

After loading it, you can set cookies with $kookies.set('name', 'value') and read them with $kookies.get. You can also delete cookies with $kookies.remove.

Each method accepts an options object that can include the path and expires arguments. You can also store JSON objects as cookies with $kookiesProvider.config.json = true.


Have you ever written a blog engine or CMS that has to display source code? If so you’ve probably run into the issue where human-readable HTML doesn’t work well with pre elements if initial indentation is included.

preCode.js (License: MIT) by Leon Sorokin is a small script that finds <pre><code> blocks and strips the leading and trailing whitespace, so syntax highlighters should be able to display code properly.

It’s written using the standard DOM API, so it doesn’t need any dependencies. It also fixes whitespace in textarea elements.
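The core trick – measuring the smallest indentation shared by all non-blank lines and removing it – can be sketched in a few lines of plain JavaScript. This is an illustration of the idea only, not preCode.js’s actual code:

```javascript
// Strip the smallest common leading indentation from a block of code,
// plus any leading blank lines and trailing whitespace left by the markup.
function dedent(text) {
  var lines = text.replace(/^\n+|\s+$/g, '').split('\n');
  // Find the minimum indentation across all non-blank lines
  var indent = Math.min.apply(Math, lines
    .filter(function (line) { return line.trim() !== ''; })
    .map(function (line) { return line.match(/^[ \t]*/)[0].length; }));
  return lines
    .map(function (line) { return line.slice(indent); })
    .join('\n');
}

console.log(dedent('\n    if (x) {\n      y();\n    }\n  '));
// if (x) {
//   y();
// }
```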

Symbols in ECMAScript 6

Symbols are a new primitive type in ECMAScript 6 [1]. This blog post explains how they work.

A new primitive type

ECMAScript 6 introduces a new primitive type: symbols. They are tokens that serve as unique IDs. You create symbols via the factory function Symbol() (which is loosely similar to String returning strings if called as a function):

    let symbol1 = Symbol();

Symbol() has an optional string-valued parameter that lets you give the newly created symbol a description:

    > let symbol2 = Symbol('symbol2');
    > String(symbol2)
    'Symbol(symbol2)'

Every symbol returned by Symbol() is unique, every symbol has its own identity:

    > symbol1 === symbol2
    false

You can see that symbols are primitive if you apply the typeof operator to one of them – it will return a new symbol-specific result:

    > typeof symbol1
    'symbol'

Aside: Two quick ideas of mine. If a symbol has no description, JavaScript engines could use the name of the variable (or property) that a symbol is assigned to. Minifiers could also help, by turning the original name of a variable into a parameter for Symbol.

Symbols as property keys

Symbols can be used as property keys:

    const MY_KEY = Symbol();
    let obj = {};
    obj[MY_KEY] = 123;
    console.log(obj[MY_KEY]); // 123

Classes and object literals have a feature called computed property keys [2]: You can specify the key of a property via an expression, by putting it in square brackets. In the following object literal, we use a computed property key to make the value of MY_KEY the key of a property.

    const MY_KEY = Symbol();
    let obj = {
        [MY_KEY]: 123
    };

A method definition can also have a computed key:

    const FOO = Symbol();
    let obj = {
        [FOO]() {
            return 'bar';
        }
    };
    console.log(obj[FOO]()); // bar

Enumerating own property keys

Given that there is now a new kind of value that can become the key of a property, the following terminology is used for ECMAScript 6:

  • Property keys are either strings or symbols.
  • Property names are strings.

Let’s examine the API for enumerating own property keys by first creating an object.

    let obj = {
        [Symbol('my_key')]: 1,
        enum: 2,
        nonEnum: 3
    };
    Object.defineProperty(obj,
        'nonEnum', { enumerable: false });

Object.getOwnPropertyNames() ignores symbol-valued property keys:

    > Object.getOwnPropertyNames(obj)
    ['enum', 'nonEnum']

Object.getOwnPropertySymbols() ignores string-valued property keys:

    > Object.getOwnPropertySymbols(obj)
    [Symbol(my_key)]

Reflect.ownKeys() considers all kinds of keys:

    > Reflect.ownKeys(obj)
    [Symbol(my_key), 'enum', 'nonEnum']

The name of Object.keys() doesn’t really work anymore: it only considers enumerable property keys that are strings.

    > Object.keys(obj)
    ['enum']

Using symbols to represent concepts

In ECMAScript 5, one often represents concepts (think enum constants) via strings. For example:

    var COLOR_RED    = 'RED';
    var COLOR_GREEN  = 'GREEN';
    var COLOR_BLUE   = 'BLUE';

However, strings are not as unique as we’d like them to be. To see why, let’s look at the following function.

    function getComplement(color) {
        switch (color) {
            case COLOR_RED:
                return COLOR_GREEN;
            case COLOR_ORANGE:
                return COLOR_BLUE;
            case COLOR_YELLOW:
                return COLOR_VIOLET;
            case COLOR_GREEN:
                return COLOR_RED;
            case COLOR_BLUE:
                return COLOR_ORANGE;
            case COLOR_VIOLET:
                return COLOR_YELLOW;
            default:
                throw new Error('Unknown color: '+color);
        }
    }

It is noteworthy that you can use arbitrary expressions as switch cases, you are not limited in any way. For example:

    function isThree(x) {
        switch (x) {
            case 1 + 1 + 1:
                return true;
            default:
                return false;
        }
    }

We use the flexibility that switch offers us and refer to the colors via our constants (COLOR_RED etc.) instead of hard-coding them ('RED' etc.).

Interestingly, even though we do so, there can still be mix-ups. For example, someone may define a constant for a mood:

    var MOOD_BLUE = 'BLUE';

Now the value of BLUE is not unique anymore and MOOD_BLUE can be mistaken for it. If you use it as a parameter for getComplement(), it returns 'ORANGE' where it should throw an exception.
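The mix-up is easy to demonstrate. The following self-contained sketch condenses getComplement() down to a single case:

```javascript
// Condensed versions of the constants above:
var COLOR_BLUE   = 'BLUE';
var COLOR_ORANGE = 'ORANGE';

// getComplement() reduced to one case, for brevity:
function getComplement(color) {
    switch (color) {
        case COLOR_BLUE:
            return COLOR_ORANGE;
        default:
            throw new Error('Unknown color: ' + color);
    }
}

// A constant from an unrelated domain happens to share the string value...
var MOOD_BLUE = 'BLUE';

// ...so it is accepted as if it were a color:
console.log(getComplement(MOOD_BLUE)); // 'ORANGE'
```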

Let’s use symbols to fix this example. Now we can also use the ECMAScript 6 feature const, which lets us declare actual constants (you can’t change what value is bound to a constant, but the value itself may be mutable).

    const COLOR_RED    = Symbol();
    const COLOR_ORANGE = Symbol();
    const COLOR_YELLOW = Symbol();
    const COLOR_GREEN  = Symbol();
    const COLOR_BLUE   = Symbol();
    const COLOR_VIOLET = Symbol();

Each value returned by Symbol() is unique, which is why no other value can be mistaken for BLUE now. Intriguingly, the code of getComplement() doesn’t change at all if we use symbols instead of strings, which shows how similar they are.

Symbols as keys of properties

Being able to create properties whose keys never clash with other keys is useful in two situations:

  • If several parties contribute internal properties to the same object, via mixins.
  • To keep meta-level properties from clashing with base-level properties.

Symbols as keys of internal properties

Mixins are object fragments (sets of methods) that you can compose to augment the functionality of an object or a prototype. If their methods have symbols as keys, they can’t clash with other methods (of other mixins or of the object that they are added to), anymore.

Public methods are seen by clients of the object a mixin is added to. For usability’s sake, you probably want those methods to have string keys. Internal methods are only known to the mixin or only needed to communicate with it. They profit from having symbols as keys.

Symbols do not offer real privacy, because it is easy to find out the symbol-valued property keys of an object. But the guarantee that a property key can’t ever clash with any other property key is often enough. If you truly want to prevent the outside from accessing private data, you need to use WeakMaps or closures. For example:

    // One WeakMap per private property
    const PASSWORD = new WeakMap();
    class Login {
        constructor(name, password) {
   = name;
            PASSWORD.set(this, password);
        }
        hasPassword(pw) {
            return PASSWORD.get(this) === pw;
        }
    }

The instances of Login are keys in the WeakMap PASSWORD. The WeakMap does not prevent the instances from being garbage-collected. Entries whose keys are objects that don’t exist anymore are removed from WeakMaps.

The same code looks as follows if you use a symbol key for the internal property.

    const PASSWORD = Symbol();
    class Login {
        constructor(name, password) {
   = name;
            this[PASSWORD] = password;
        }
        hasPassword(pw) {
            return this[PASSWORD] === pw;
        }
    }

Symbols as keys of meta-level properties

Symbols having unique identities makes them ideal as keys of public properties that exist on a different level than “normal” property keys, because meta-level keys and normal keys must not clash. One example of meta-level properties are methods that objects can implement to customize how they are treated by a library. Using symbol keys protects the library from mistaking normal methods for customization methods.

Iterability [3] in ECMAScript 6 is one such customization. An object is iterable if it has a method whose key is the symbol (stored in) Symbol.iterator. In the following code, obj is iterable.

    let obj = {
        data: [ 'hello', 'world' ],
        [Symbol.iterator]() {
            const self = this;
            let index = 0;
            return {
                next() {
                    if (index < {
                        return {
                            done: false
                        };
                    } else {
                        return { done: true };
                    }
                }
            };
        }
    };

The iterability of obj enables you to use the for-of loop and similar JavaScript features:

    for (let x of obj) {
        console.log(x);
    }
    // Output:
    // hello
    // world

Crossing realms with symbols

A code realm (short: realm) is a context in which pieces of code exist. It includes global variables, loaded modules and more. Even though code exists “inside” exactly one realm, it may have access to code in other realms. For example, each frame in a browser has its own realm. And execution can jump from one frame to another, as the following HTML demonstrates.

    <head>
        <script>
            function test(arr) {
                var iframe = frames[0];
                // This code and the iframe’s code exist in
                // different realms. Therefore, global variables
                // such as Array are different:
                console.log(Array === iframe.Array); // false
                console.log(arr instanceof Array); // false
                console.log(arr instanceof iframe.Array); // true

                // But: symbols are the same
                console.log(Symbol.iterator ===
                            iframe.Symbol.iterator); // true
            }
        </script>
    </head>
    <body>
        <iframe srcdoc="<script>window.parent.test([])</script>">
        </iframe>
    </body>

The problem is that each realm has its own local copy of Array and, because objects have individual identities, those local copies are considered different, even though they are essentially the same object. Similarly, libraries and user code are loaded once per realm, and each realm has a different version of the same object.

In contrast, members of the primitive types boolean, number and string don’t have individual identities and multiple copies of the same value are not a problem: The copies are compared “by value” (by looking at the content, not at the identity) and are considered equal.
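A quick sketch of this difference:

```javascript
// Primitive values are compared by content:
console.log('abc' === 'abc'); // true
console.log(123 === 123);     // true

// Objects are compared by identity, so two structurally
// identical objects are still considered different:
console.log({} === {});       // false

// Symbols behave like objects in this respect:
console.log(Symbol('x') === Symbol('x')); // false
```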

Symbols have individual identities and thus don’t travel across realms as smoothly as other primitive values. That is a problem for symbols such as Symbol.iterator that should work across realms: If an object is iterable in one realm, it should be iterable in others, too. If a cross-realm symbol is provided by the JavaScript engine, the engine can make sure that the same value is used in each realm. For libraries, however, we need extra support, which comes in the form of the global symbol registry: This registry is global to all realms and maps strings to symbols. For each symbol, libraries need to come up with a string that is as unique as possible. To create the symbol, they don’t use Symbol(), they ask the registry for the symbol that the string is mapped to. If the registry already has an entry for the string, the associated symbol is returned. Otherwise, entry and symbol are created first.

You ask the registry for a symbol via Symbol.for() and retrieve the string associated with a symbol (its key) via Symbol.keyFor():

    > let sym = Symbol.for('Hello everybody!');
    > Symbol.keyFor(sym)
    'Hello everybody!'

As expected, cross-realm symbols, such as Symbol.iterator, that are provided by the JavaScript engine are not in the registry:

    > Symbol.keyFor(Symbol.iterator)
    undefined

Safety checks

JavaScript warns you about two mistakes by throwing exceptions: Invoking Symbol as a constructor and coercing symbols to string.

Invoking Symbol as a constructor

While all other primitive values have literals, you need to create symbols by function-calling Symbol. Thus, it is relatively easy to accidentally invoke Symbol as a constructor. That produces instances of Symbol and is not very useful. Therefore, an exception is thrown when you try to do that:

    > new Symbol()
    TypeError: Symbol is not a constructor

There is still a way to create wrapper objects, instances of Symbol: Object, called as a function, converts all values to objects, including symbols.

    > let sym = Symbol();
    > typeof sym
    'symbol'
    > let wrapper = Object(sym);
    > typeof wrapper
    'object'
    > wrapper instanceof Symbol
    true

Coercing a symbol to string

Given that both strings and symbols can be property keys, you want to protect people from accidentally converting a symbol to a string. For example, like this:

    let propertyKey = '__' + anotherPropertyKey;

ECMAScript 6 throws an exception if one uses implicit conversion to string (handled internally via the ToString operation):

    > var sym = Symbol('My symbol');
    > '' + sym
    TypeError: Cannot convert a Symbol value to a string

However, you can still explicitly convert symbols to strings:

    > String(sym)
    'Symbol(My symbol)'
    > sym.toString()
    'Symbol(My symbol)'

Frequently asked questions

Are symbols primitives or objects?

In some ways, symbols are like primitive values, in other ways, they are like objects:

  • Symbols are like strings (primitive values) w.r.t. what they are used for: as representations of concepts and as property keys.
  • Symbols are like objects in that each symbol has its own identity.

The latter point can be illustrated by using objects as colors instead of symbols:

    const COLOR_RED = Object.freeze({});

Optionally, you can make objects-as-symbols more minimal by freezing Object.create(null) instead of {}. Note that, in contrast to strings, objects can’t become property keys.
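That last point is easy to check: an object used as a property key is coerced to the string '[object Object]', so distinct objects collide. A small sketch:

```javascript
const COLOR_RED  = Object.freeze({});
const COLOR_BLUE = Object.freeze({});

let obj = {};
obj[COLOR_RED]  = 'red';
obj[COLOR_BLUE] = 'blue';

// Both keys were coerced to the same string, so the second
// assignment overwrote the first:
console.log(obj[COLOR_RED]);   // 'blue'
console.log(Object.keys(obj)); // ['[object Object]']
```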

What are symbols then – primitive values or objects? In the end, they were turned into primitives, for two reasons.

First, symbols are more like strings than like objects: They are a fundamental value of the language, they are immutable and they can be used as property keys. Symbols having unique identities doesn’t necessarily contradict them being like strings: UUID algorithms produce strings that are quasi-unique.

Second, symbols are most often used as property keys, so it makes sense to optimize the JavaScript specification and the implementations for that use case. Then many abilities of objects are unnecessary:

  • Objects can become prototypes of other objects.
  • Wrapping an object with a proxy doesn’t change what it can be used for.
  • Objects can be introspected: via instanceof, Object.keys(), etc.

Not having these abilities makes life easier for the specification and the implementations. There are also reports from the V8 team that, when handling property keys, it is simpler to treat a primitive type differently from objects.

Aren’t strings enough?

In contrast to strings, symbols are unique and prevent name clashes. That is nice to have for tokens such as colors, but it is essential for supporting meta-level methods such as the one whose key is Symbol.iterator. Python uses the special name __iter__ to avoid clashes. You can reserve double underscore names for programming language mechanisms, but what is a library to do? With symbols, we have an extensibility mechanism that works for everyone. As you will see later, in the section on public symbols, JavaScript itself already makes ample use of this mechanism.

There is one hypothetical alternative to symbols when it comes to clash-free property keys: use a naming convention. For example, strings with URLs (e.g. ''). But that would introduce a second category of property keys (versus “normal” property names that are usually valid identifiers and don’t contain colons, slashes, dots, etc.), which is basically what symbols are, anyway. Then it is more elegant to explicitly turn those keys into a different kind of value.

The symbol API

This section gives an overview of the ECMAScript 6 API for symbols.

The function Symbol

  • Symbol(description?) → symbol
    Creates a new symbol. The optional parameter description allows you to give the symbol a description, which is useful for debugging.

Symbol is not intended to be used as a constructor – an exception is thrown if you invoke it via new.

Public symbols

Several public symbols can be accessed via properties of Symbol. They are all used as property keys and enable you to customize how JavaScript handles an object.

Customizing basic language operations:

  • Symbol.hasInstance (method)
    Lets an object O customize the behavior of x instanceof O.

  • Symbol.toPrimitive (method)
    Lets an object customize how it is converted to a primitive value. This is the first step whenever something is coerced to a primitive type (via operators etc.).

  • Symbol.toStringTag (string)
    Called by Object.prototype.toString to compute the default string description of an object obj: '[object '+obj[Symbol.toStringTag]+']'.
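As a sketch, here is how two of these customizations look in practice (the EvenNumber class and the duck object are made up for illustration):

```javascript
// Symbol.hasInstance: customize the behavior of `instanceof`
class EvenNumber {
    static [Symbol.hasInstance](x) {
        return typeof x === 'number' && x % 2 === 0;
    }
}
console.log(4 instanceof EvenNumber); // true
console.log(3 instanceof EvenNumber); // false

// Symbol.toStringTag: customize Object.prototype.toString
const duck = { [Symbol.toStringTag]: 'Duck' };
console.log(Object.prototype.toString.call(duck)); // '[object Duck]'
```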

Iteration [3]:

  • Symbol.iterator (method)
    Makes an object iterable. Returns an iterator.

Regular expressions: Four string methods are simply forwarded to their regular expression parameters. The methods that they are forwarded to have the following keys.

  • Symbol.match is used by String.prototype.match.
  • Symbol.replace is used by String.prototype.replace.
  • is used by
  • Symbol.split is used by String.prototype.split.

Miscellaneous:
  • Symbol.unscopables (Object)
    Lets an object hide some properties from the with statement.

  • Symbol.species (method)
    Helps with cloning typed arrays and instances of RegExp, ArrayBuffer and Promise.

  • Symbol.isConcatSpreadable (boolean) Indicates whether Array.prototype.concat should concatenate the elements of an object or the object as an element.
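A small sketch of what this flag does (the arrayLike object is made up for illustration):

```javascript
const arrayLike = {
    length: 2,
    0: 'a',
    1: 'b',
    [Symbol.isConcatSpreadable]: true
};

// Without the symbol, concat would append arrayLike as a single element;
// with it, the elements are spread into the result:
console.log(['x'].concat(arrayLike)); // ['x', 'a', 'b']

// Conversely, an array can opt out of being spread:
const arr = ['y', 'z'];
arr[Symbol.isConcatSpreadable] = false;
console.log(['x'].concat(arr)); // ['x', ['y', 'z']]
```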

Global symbol registry

If you want a symbol to be the same in all realms, you need to create it via the global symbol registry. The following method lets you do that:

  • Symbol.for(str) → symbol
    Returns the symbol whose key is the string str in the registry. If str isn’t in the registry yet, a new symbol is created and filed in the registry under the key str.

Another method lets you perform the reverse lookup and find out under which key a symbol is stored in the registry. This may be useful for serializing symbols.

  • Symbol.keyFor(sym) → string
    Returns the string that is associated with the symbol sym in the registry. If sym isn’t in the registry, this method returns undefined.

Further reading

  1. Using ECMAScript 6 today
  2. ECMAScript 6: new OOP features besides classes
  3. Iterators and generators in ECMAScript 6
Alex Young (DailyJS) @ London › England ( Feed )
Friday, 26 December 2014
Dynamic-json-resume, JSnoX

I don’t know about you, but I hate putting together my résumé. I start to focus too much on the presentation even though you’re meant to keep it simple. Dynamic-json-resume (GitHub: jrm2k6/dynamic-json-resume, License: MIT, npm: json-resume-dynamic) by Jeremy Dagorn is a module for generating résumés from a simple JSON format. You can output PDFs, and use it with a Node application.

Because your CV is now represented by a structured data format, you can reuse it in other places. For example, your personal website could render it in a sidebar.

James Long’s article, Removing User Interface Complexity, or Why React is Awesome, inspired the project. React seems like the perfect way to manipulate and render your JSON CV.


What do you do if you like React but dislike JSX? Shawn Price sent in his coworker’s project, JSnoX (GitHub: af/JSnoX, License: MIT, npm: jsnox), which provides a simple React markup API that works in pure JavaScript:

var d = require('jsnox')(React)
var LoginForm = React.createClass({
  submitLogin: function() { ... },

  render: function() {
    return d('form[method=POST]', { onSubmit: this.submitLogin }, [
      d('h1.form-header', 'Login'),
      d('input:email[name=email]', { placeholder: 'Email' }),
      d('input:password[name=pass]', { placeholder: 'Password' }),
      d(MyOtherComponent, { myProp: 'foo' }),
      d('button:submit', 'Login')
    ])
  }
})

This API sidesteps the issue of JavaScript’s lack of multiline string handling for embedded templates, while not requiring too much fiddly syntax for handling DOM attributes.

Alex Young (DailyJS) @ London › England ( Feed )
Thursday, 25 December 2014
Holiday Hacking: Apps

What do you do when you leave your computer/laptop at home while you visit family for the holidays? I always do this, thinking that it’ll be better to spend some quality time with the family, but there are moments where people are doing their own thing and I wish I had a laptop to play with some code.

These days of course many of us tote around tablets or large phones, so there is some potential for hacking or at least learning about new programming techniques. One of my favourite apps is actually Vim, which you can get for iOS and Android:

To the uninitiated, Vim for a touchscreen sounds horrible, but it’s actually pretty good – because it’s modal you can enter Command-line mode with : and leave with the escape key easily enough. If you’re an experienced Vim user then it can be revealing to think about the mnemonics for commands rather than relying on muscle memory.

I also found that the big programming video services have iOS and Android apps, so you can study a new programming language, framework, or library:

I’ve actually used the Pluralsight app on Android during my commute to help me learn enough C# to implement the Windows portion of a Node/Windows/iOS/Mac application I work on professionally.

Because tablet operating systems support browsers, there are a lot of apps that wrap the built-in JavaScript interpreter to let you practice writing JavaScript. For example:

And you can even write Node on iOS with Node - JavaScript Interpreter. There are manuals for Node on the Play Store as well. Or you can get a more broad manual management app like Dash. I found Dash useful for looking up Mozilla’s JavaScript documentation, and Node’s, when I was working offline on my Node book.

Apple and Google’s book stores also sell many technical books from the popular computer science book publishers, so you should be able to find something to do while your parents argue and your partner is walking the dog, wrangling toddlers, or snoozing after too much turkey.

Alex Young (DailyJS) @ London › England ( Feed )
Wednesday, 24 December 2014
Node Roundup: 0.10.35, Prettiest, Artisan Validator

Node 0.10.35

Node 0.10.35 was released today, which has some changes to timers relating to the unref behaviour:

  • timers: don’t close interval timers when unrefd (Julien Gilli)
  • timers: don’t mutate unref list while iterating it (Julien Gilli)

This was released soon after 0.10.34, which updated v8, uv, zlib, and some core modules including child_process and crypto.


What if you want to prevent a command-line script from executing more than once? Prettiest (GitHub: punkave/prettiest, License: MIT, npm: prettiest) from P’unk Avenue LLC combines data storage and locking, and should work well for Node command-line scripts made with modules like ShellJS.

This is the simplest example – it will track how many times it has been run:

var data = require('prettiest')();

data.count = data.count || 0;
data.count++;
console.log('I have been run', data.count, ' times.');

I’ve often wanted to persist data in command-line scripts, but didn’t want to bother with sqlite or JSON file serialisation, so this seems ideal for such cases. And even if you want the locking behaviour, your scripts can still be asynchronous.

Artisan Validator

Connor Peet is on a quest to create a simple and well-documented data validator for Node. Artisan Validator (GitHub: MCProHosting/artisan-validator, License: MIT, npm: artisan-validator) allows you to define rules that get validated against objects, so you can easily hook it into a Node web application:

var validator = require('artisan-validator')();
var rules = {
  username: ['required', 'between: 4, 30', 'alphanumeric'],
  password: ['required', 'longer: 5'],
  acceptTOS: ['required', 'boolean: true']
};

validator.try(req.body, rules).then(function (result) {
  if (result.failed) {
    res.json(400, result.errors);
  } else {
    // Validation passed: continue handling the request
  }
});

You can add custom validators with validator.validators.add, but there are quite a few built-in rules that cover JavaScript types, and various string and date formats. The error messages can be localised as well.

Alex Young (DailyJS) @ London › England ( Feed )
Tuesday, 23 December 2014
Particle Paintings, AMD to CommonJS with Recast

Tadeu Zagallo sent in a Canvas experiment that uses typed arrays, optimised sorting algorithms, and inlining and bitwise operators to boost performance. The Particture demo allows you to use your webcam for the source image, and draws images with a cool trail effect.

The repository at tadeuzagallo/particture has the source, and it uses dat.gui for the controls.


Many readers seem to be searching for solutions to the module refactor problem, where older projects are refactored to use modern module systems. Dustan Kasten wanted to convert projects that use AMD to CommonJS, and he’s used Recast to do this, through the recast-to-cjs project that is published by his company (Skookum Digital Works).

Dustan has written an article that shows how to convert a project to CommonJS: Converting a project from AMD to CJS with Recast. The AST is traversed to find AMD definitions, and then converted into equivalent CommonJS dependencies.

It’s possible that Node developers may end up doing something like this if ES6 modules become the norm, although I suspect ES6’s export and import statements will need manual intervention to take advantage of import obj from lib.

ECMAScript 6: new OOP features besides classes

Classes [2] are the major new OOP feature in ECMAScript 6 [1]. However, it also includes new features for object literals and new utility methods in Object. This blog post describes them.


New features of object literals

Method definitions

In ECMAScript 5, methods are properties whose values are functions:

    var obj = {
        myMethod: function () { /* ··· */ }
    };

In ECMAScript 6, methods are still function-valued properties, but there is now a more compact way of defining them:

    let obj = {
        myMethod() { /* ··· */ }
    };

Getters and setters continue to work as they did in ECMAScript 5 (note how syntactically similar they are to method definitions):

    let obj = {
        get foo() {
            console.log('GET foo');
            return 123;
        },
        set bar(value) {
            console.log('SET bar to '+value);
            // return value is ignored
        }
    };

Let’s use obj:

    > obj.foo
    GET foo
    123
    > = true
    SET bar to true

There is also a way to concisely define properties whose values are generator functions [3]:

    let obj = {
        * myGeneratorMethod() { /* ··· */ }
    };

This code is equivalent to:

    let obj = {
        myGeneratorMethod: function* () { /* ··· */ }
    };

Property value shorthands

Property value shorthands let you abbreviate the definition of a property in an object literal: If the name of the variable that specifies the property value is also the property key then you can omit the key. This looks as follows.

    let x = 4;
    let y = 1;
    let obj = { x, y };

The last line is equivalent to:

    let obj = { x: x, y: y };

Property value shorthands work well together with destructuring [4]:

    let obj = { x: 4, y: 1 };
    let {x,y} = obj;
    console.log(x); // 4
    console.log(y); // 1

One use case for property value shorthands are multiple return values [4].
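
To make that use case concrete, here is a small sketch (findElement is an illustrative helper, not a standard function): the callee bundles several values into an object via shorthands, and the caller takes them apart again with destructuring.

```javascript
function findElement(array, predicate) {
    for (let index = 0; index < array.length; index++) {
        let element = array[index];
        if (predicate(element)) {
            // Shorthand for { element: element, index: index }
            return { element, index };
        }
    }
    return { element: undefined, index: -1 };
}

// Destructuring picks the "return values" apart again:
let { element, index } = findElement([7, 8, 6], x => x % 2 === 0);
console.log(element); // 8
console.log(index);   // 1
```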

Computed property keys

Remember that there are two ways of specifying a key when you set a property.

  1. Via a fixed name: obj.foo = true;
  2. Via an expression: obj['b'+'ar'] = 123;

In object literals, you only have option #1 in ECMAScript 5. ECMAScript 6 additionally provides option #2:

    let propKey = 'foo';
    let obj = {
        [propKey]: true,
        ['b'+'ar']: 123
    };

This new syntax can also be combined with a method definition:

    let obj = {
        ['h'+'ello']() {
            return 'hi';
        }
    };
    console.log(obj.hello()); // hi

The main use case for computed property keys are symbols: you can define a public symbol and use it as a special property key that is always unique. One prominent example is the symbol stored in Symbol.iterator. If an object has a method with that key, it becomes iterable [3]. The method must return an iterator, which is used by constructs such as the for-of loop to iterate over the object. The following code demonstrates how that works.

    let obj = {
        * [Symbol.iterator]() { // (A)
            yield 'hello';
            yield 'world';
        }
    };
    for (let x of obj) {
        console.log(x);
    }
    // Output:
    // hello
    // world

Line A starts a generator method definition with a computed key (the symbol stored in Symbol.iterator).

New methods of Object

Object.assign(target, source_1, source_2, ···)

This method merges the sources into the target: It modifies target, first copies all enumerable own properties of source_1 into it, then all own properties of source_2, etc. At the end, it returns the target.

    let obj = { foo: 123 };
    Object.assign(obj, { bar: true });
    console.log(JSON.stringify(obj));
        // {"foo":123,"bar":true}

Let’s look more closely at how Object.assign() works:

  • Both kinds of property keys: Object.assign() supports both strings and symbols as property keys.

  • Only enumerable own properties: Object.assign() ignores inherited properties and properties that are not enumerable.

  • Copying via assignment: Properties in the target object are created via assignment (internal operation [[Put]]). That means that if target has (own or inherited) setters, those will be invoked during copying. An alternative would have been to define new properties, an operation which always creates new own properties and never invokes setters. There originally was a proposal for a variant of Object.assign() that uses definition instead of assignment. That proposal has been rejected for ECMAScript 6, but may be reconsidered for later editions.
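
A small sketch of that difference (the property name and the log array are made up for illustration): because Object.assign() copies via assignment, a setter on the target intercepts the copied value, whereas defining a property would simply replace the setter.

```javascript
let log = [];
let targetWithSetter = {
    set prop(value) {           // own setter on the target
        log.push('SET ' + value);
    }
};
// Assignment semantics: the setter intercepts the copied value.
Object.assign(targetWithSetter, { prop: 'abc' });
console.log(log); // [ 'SET abc' ]

// Definition semantics (the rejected variant) would replace the setter:
let targetDefined = {};
Object.defineProperty(targetDefined, 'prop', { value: 'abc' });
console.log(targetDefined.prop); // 'abc'
```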

Use cases for Object.assign()

Let’s look at a few use cases. You can use Object.assign() to add properties to this in a constructor:

    class Point {
        constructor(x, y) {
            Object.assign(this, {x, y});
        }
    }

Object.assign() is also useful for filling in defaults for missing properties. In the following example, we have an object DEFAULTS with default values for properties and an object options with data.

    const DEFAULTS = {
        logLevel: 0,
        outputFormat: 'html'
    };
    function processContent(options) {
        options = Object.assign({}, DEFAULTS, options); // (A)
        // ···
    }

In line A, we created a fresh object, copied the defaults into it and then copied options into it, overriding the defaults. Object.assign() returns the result of these operations, which we assign to options.

Another use case is adding methods to objects:

    Object.assign(SomeClass.prototype, {
        someMethod(arg1, arg2) { /* ··· */ },
        anotherMethod() { /* ··· */ }
    });

You could also assign functions, but then you don’t have the nice method definition syntax and need to mention SomeClass.prototype each time:

    SomeClass.prototype.someMethod = function (arg1, arg2) { /* ··· */ };
    SomeClass.prototype.anotherMethod = function () { /* ··· */ };

One last use case for Object.assign() is a quick way of cloning objects:

    function clone(orig) {
        return Object.assign({}, orig);
    }

This way of cloning is also somewhat dirty, because it doesn’t preserve the property attributes of orig. If that is what you need, you have to use property descriptors.
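
A sketch of such a descriptor-based clone (cloneWithDescriptors is an illustrative name, not a standard function): every own property is copied together with its attributes, so non-enumerable properties and getters/setters survive.

```javascript
function cloneWithDescriptors(orig) {
    let clone = Object.create(Object.getPrototypeOf(orig));
    // Copy each own property together with its attributes
    // (enumerable, writable, configurable, getters/setters):
    for (let key of Reflect.ownKeys(orig)) {
        Object.defineProperty(clone, key,
            Object.getOwnPropertyDescriptor(orig, key));
    }
    return clone;
}

let orig = {};
Object.defineProperty(orig, 'hidden', { value: 123, enumerable: false });
let copy = cloneWithDescriptors(orig);
console.log(copy.hidden); // 123
console.log(Object.getOwnPropertyDescriptor(copy, 'hidden').enumerable); // false
```

Note that Object.assign() would have skipped the non-enumerable property entirely.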

If you want the clone to have the same prototype as the original, you can use Object.getPrototypeOf() and Object.create():

    function clone(orig) {
        let origProto = Object.getPrototypeOf(orig);
        return Object.assign(Object.create(origProto), orig);
    }


In ECMAScript 6, the key of a property can be either a string or a symbol. There are now five tool methods that retrieve the property keys of an object obj:

  • Object.keys(obj) : Array<string>
    retrieves all string-valued keys of all enumerable own properties.

  • Object.getOwnPropertyNames(obj) : Array<string>
    retrieves all string-valued keys of all own properties.

  • Object.getOwnPropertySymbols(obj) : Array<symbol>
    retrieves all symbol-valued keys of all own properties.

  • Reflect.ownKeys(obj) : Array<string|symbol>
    retrieves all keys of all own properties.

  • Reflect.enumerate(obj) : Iterator
    retrieves all string-valued keys of all enumerable properties.
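
The following sketch contrasts the first four methods on an object that has an enumerable string key, a non-enumerable string key and a symbol key (the property names are made up; Reflect.enumerate is omitted because it returns an iterator rather than an array):

```javascript
let sym = Symbol('mySymbol');
let obj = { str: 1, [sym]: 2 };
Object.defineProperty(obj, 'nonEnum', { value: 3, enumerable: false });

console.log(Object.keys(obj));                  // [ 'str' ]
console.log(Object.getOwnPropertyNames(obj));   // [ 'str', 'nonEnum' ]
console.log(Object.getOwnPropertySymbols(obj)); // [ Symbol(mySymbol) ]
console.log(Reflect.ownKeys(obj));              // [ 'str', 'nonEnum', Symbol(mySymbol) ]
```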

, value2)

The strict equals operator (===) treats two values differently than one might expect.

First, NaN is not equal to itself.

    > NaN === NaN
    false

That is unfortunate, because it often prevents us from detecting NaN:

    > [0,NaN,2].indexOf(NaN)
    -1

Second, JavaScript has two zeros, but strict equals treats them as if they were the same value:

    > -0 === +0
    true

Doing this is normally a good thing. provides a way of comparing values that is a bit more precise than ===. It works as follows:

    >, NaN)
    true
    >, +0)
    false

Everything else is compared as with ===.

If we combine with the new ECMAScript 6 array method findIndex() [5], we can find NaN in arrays:

    > [0,NaN,2].findIndex(x =>, NaN))
    1

Object.setPrototypeOf(obj, proto)

This method sets the prototype of obj to proto. The non-standard way of doing so in ECMAScript 5, supported by many engines, is assigning to the special property __proto__. The recommended way of setting the prototype remains the same as in ECMAScript 5: during the creation of an object, via Object.create(). That will always be faster than first creating an object and then setting its prototype. Obviously, it doesn’t work if you want to change the prototype of an existing object.
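
A minimal sketch of changing the prototype of an existing object (the object and method names are made up):

```javascript
let proto = {
    describe() {
        return 'proto method';
    }
};
let obj = { own: true };
// Change the prototype of an already-existing object:
Object.setPrototypeOf(obj, proto);
console.log(obj.describe());                       // 'proto method'
console.log(Object.getPrototypeOf(obj) === proto); // true
```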


Further reading

  1. Using ECMAScript 6 today
  2. ECMAScript 6: classes
  3. Iterators and generators in ECMAScript 6
  4. Multiple return values in ECMAScript 6
  5. ECMAScript 6’s new array methods
Addy Osmani @ London › England ( Feed )
Saturday, 20 December 2014
JavaScript Application Architecture On The Road To 2015
In my new write-up on Medium, I look at the state of application architecture in the JavaScript community as we ebb our way towards 2015. In it, I talk about composition, functional boundaries, modularity, immutable data structures, CSP channels and … Continue reading →
One JavaScript: avoiding versioning in ECMAScript 6

What is the best way to add new features to a language? This blog post describes the approach taken by ECMAScript 6 [3], the next version of JavaScript. It is called One JavaScript, because it avoids versioning.




In principle, a new version of a language is a chance to clean it up, by removing outdated features or by changing how features work. That means that new code doesn’t work in older implementations of the language and that old code doesn’t work in a new implementation. Each piece of code is linked to a specific version of the language. Two approaches are common for dealing with versions being different.

First, you can take an “all or nothing” approach and demand that, if a code base wants to use the new version, it must be upgraded completely. Python took that approach when upgrading from Python 2 to Python 3. A problem with it is that it may not be feasible to migrate all of an existing code base at once, especially if it is large. Furthermore, the approach is not an option for the web, where you’ll always have old code and where JavaScript engines are updated automatically.

Second, you can permit a code base to contain code in multiple versions, by tagging code with versions. On the web, you could tag ECMAScript 6 code via a dedicated Internet media type. Such a media type can be associated with a file via an HTTP header:

    Content-Type: application/ecmascript;version=6

It can also be associated via the type attribute of the <script> element (whose default value is text/javascript):

    <script type="application/ecmascript;version=6">

This specifies the version out of band, externally to the actual content. Another option is to specify the version inside the content (in-band). For example, by starting a file with the following line:

    use version 6;

Both ways of tagging are problematic: out-of-band versions are brittle and can get lost, in-band versions add clutter to code.

A more fundamental issue is that allowing multiple versions per code base effectively forks a language into sub-languages that have to be maintained in parallel. This causes problems:

  • Engines become bloated, because they need to implement the semantics of all versions. The same applies to tools analyzing the language (e.g. style checkers such as JSLint).
  • Programmers need to remember how the versions differ.
  • Code becomes harder to refactor, because you need to take versions into consideration when you move pieces of code.

Therefore, versioning is something to avoid, especially for JavaScript and the web.

Evolution without versioning

But how can we get rid of versioning? By always being backwards-compatible. That means we must give up some of our ambitions w.r.t. cleaning up JavaScript:

  • We can’t introduce breaking changes: Being backwards-compatible means not removing features and not changing features. The slogan for this principle is: “don’t break the web”.
  • We can, however, add new features and make existing features more powerful.

As a consequence, no versions are needed for new engines, because they can still run all old code. David Herman calls this approach to avoiding versioning One JavaScript (1JS) [1], because it avoids splitting up JavaScript into different versions or modes. As we shall see later, 1JS even undoes some of a split that already exists, due to strict mode.

Supporting new code on old engines is more complicated. You have to detect in the engine what version of the language it supports. If it doesn’t support the latest version, you have to load different code: your new code compiled to an older version. That is how you can already use ECMAScript 6 in current engines: you compile it to ECMAScript 5 [3]. Apart from performing the compilation step ahead of time, you also have the option of compiling in the engine, at runtime.

Detecting versions is difficult, because many engines support parts of versions before they support them completely. For example, this is how you’d check whether an engine supports ECMAScript 6’s for-of loop – but that may well be the only ES6 feature it supports:

    function isForOfSupported() {
        try {
            eval("for (var e of ['a']) {}");
            return true;
        } catch (e) {
            // Possibly: check if e instanceof SyntaxError
        }
        return false;
    }

Mark Miller describes how the Caja library detects whether an engine supports ECMAScript 5. He expects detection of ECMAScript 6 to work similarly, eventually.

One JavaScript does not mean that you have to completely give up on cleaning up the language. Instead of cleaning up existing features, you introduce new, clean, features. One example for that is let, which declares block-scoped variables and is an improved version of var. It does not, however, replace var, it exists alongside it, as the superior option.
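
A quick sketch of why let is the superior option (the variable names are illustrative): var is function-scoped, while let is block-scoped.

```javascript
function varVersusLet() {
    {
        var functionScoped = 'visible in the whole function';
        let blockScoped = 'visible only in this block';
    }
    console.log(functionScoped);      // still visible: var ignores the block
    console.log(typeof blockScoped);  // 'undefined': the let binding ended with the block
    return functionScoped;
}
varVersusLet();
```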

One day, it may even be possible to eliminate features that nobody uses, anymore. Some of the ES6 features were designed by surveying JavaScript code on the web. Two examples (that are explained in more detail later) are:

  • let is available in non-strict mode, because let[x] rarely appears on the web.
  • Function declarations do occasionally appear in non-strict blocks, which is why the ES6 specification describes measures that web browsers can take to ensure that such code doesn’t break.

Strict mode and ECMAScript 6

Strict mode was introduced in ECMAScript 5 to clean up the language. It is switched on by putting the following line first in a file or in a function:

    'use strict';

Strict mode introduces three kinds of breaking changes:

  • Syntactic changes: some previously legal syntax is forbidden in strict mode. For example:
    • The with statement is forbidden. It lets users add arbitrary objects to the chain of variable scopes, which slows down execution and makes it tricky to figure out what a variable refers to.
    • Deleting an unqualified identifier (a variable, not a property) is forbidden.
    • Functions can only be declared at the top level of a scope.
    • More identifiers are reserved: implements interface let package private protected public static yield
  • More errors. For example:
    • Assigning to an undeclared variable causes a ReferenceError. In sloppy mode, a global variable is created in this case.
    • Changing read-only properties (such as the length of a string) causes a TypeError. In non-strict mode, it simply has no effect.
  • Different semantics: Some constructs behave differently in strict mode. For example:
    • arguments doesn’t track the current values of parameters, anymore.
    • this is undefined in non-method functions. In sloppy mode, it refers to the global object (window), which meant that global variables were created if you called a constructor without new.

Strict mode is a good example of why versioning is tricky: even though it enables a cleaner version of JavaScript, its adoption is still relatively low. The main reasons are that it breaks some existing code, can slow down execution and is a hassle to add to files (let alone interactive command lines). I love the idea of strict mode and don’t use it nearly often enough.

Supporting sloppy mode

One JavaScript means that we can’t give up on sloppy mode: it will continue to be around (e.g. in HTML attributes). Therefore, we can’t build ECMAScript 6 on top of strict mode, we must add its features to both strict mode and non-strict mode (a.k.a. sloppy mode). Otherwise, strict mode would be a different version of the language and we’d be back to versioning. Unfortunately, two ECMAScript 6 features are difficult to add to sloppy mode: let declarations and block-level function declarations. Let’s examine why that is and how to add them, anyway.

let declarations in sloppy mode

let enables you to declare block-scoped variables. It is difficult to add to sloppy mode, because let is only a reserved word in strict mode. That is, the following two statements are legal in ECMAScript 5 sloppy mode:

    var let = [];
    let[0] = 'abc';

In strict ECMAScript 6, you get an exception in line 1, because you are using the reserved word let as a variable name. And the statement in line 2 is interpreted as a let variable declaration.

In sloppy ECMAScript 6, the first line does not cause an exception, but the second line is still interpreted as a let declaration. The pattern in that line is rare enough that ES6 can afford to make this interpretation. Other ways of writing let declarations can’t be mistaken for existing sloppy syntax:

    let foo = 123;
    let {x,y} = computeCoordinates();

Block-level function declarations in sloppy mode

ECMAScript 5 strict mode forbids function declarations in blocks. The specification allowed them in sloppy mode, but didn’t specify how they should behave. Hence, various implementations of JavaScript support them, but handle them differently.

ECMAScript 6 wants a function declaration in a block to be local to that block. That is OK as an extension of ES5 strict mode, but breaks some sloppy code. Therefore, ES6 provides “web legacy compatibility semantics” for browsers that lets function declarations in blocks exist at function scope.

Other keywords

The identifiers yield and static are only reserved in ES5 strict mode. ECMAScript 6 uses context-specific syntax rules to make them work in sloppy mode:

  • In sloppy mode, yield is only a reserved word inside a generator function.
  • static is currently only used inside class literals, which are implicitly strict (see below).

Implicit strict mode

The bodies of modules and classes are implicitly in strict mode in ECMAScript 6 – there is no need for the 'use strict' marker. Given that virtually all of our code will live in modules in the future, ECMAScript 6 effectively upgrades the whole language to strict mode.

The bodies of other constructs (such as arrow functions and generator functions) could have been made implicitly strict, too. But given how small these constructs usually are, using them in sloppy mode would have resulted in code that is fragmented between the two modes. Classes and especially modules are large enough to make fragmentation less of an issue.

It is interesting to note that, inside a <script> element, you can’t declaratively import modules via an import statement. Instead, there will be a new element, which may be called <module>, whose insides are much like a module [2]: Modules can be imported asynchronously and code is implicitly strict and not in global scope (variables declared at the top level are not global).

Another way of importing a module, that works inside both elements, is the programmatic System.import() API that returns a module asynchronously, via a promise.

Things that can’t be fixed

The downside of One JavaScript is that you can’t fix existing quirks, especially the following two.

First, typeof null should return the string 'null' and not 'object'. But fixing that would break existing code. On the other hand, adding new results for new kinds of operands is OK, because current JavaScript engines already occasionally return custom values for host objects. One example are ECMAScript 6’s symbols:

    > typeof Symbol.iterator
    'symbol'

Second, the global object (window in browsers) shouldn’t be in the scope chain of variables. But it is also much too late to change that now. At least you won’t be in global scope in modules and within <module> elements.
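
To make the distinction concrete, a two-line sketch: the first result is the unfixable quirk, the second is a legitimately new result for a new kind of value.

```javascript
console.log(typeof null);            // 'object' – the quirk that can't be fixed
console.log(typeof Symbol.iterator); // 'symbol' – a new result for a new kind of value
```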


One JavaScript means making ECMAScript 6 completely backwards compatible. It is great that that succeeded. Especially appreciated is that modules (and thus most of our code) are implicitly in strict mode.

In the short term, adding ES6 constructs to both strict mode and sloppy mode is more work when it comes to writing the language specification and to implementing it in engines. In the long term, both the spec and engines profit from the language not being forked (less bloat etc.). Programmers profit immediately from One JavaScript, because it makes it easier to get started with ECMAScript 6.

Further reading

  1. The original 1JS proposal (warning: out of date): “ES6 doesn’t need opt-in” by David Herman.
  2. ECMAScript 6 modules in future browsers
  3. Using ECMAScript 6 today (overview plus links to more in-depth material)
Web Components Articles ( Feed )
Monday, 15 December 2014
Mozilla and Web Components: Update

Mozilla has a long history of participating in standards development. The post below shows a real-time slice of how standards are debated and adopted. The goal is to update developers who are most affected by implementation decisions we make in Firefox. We are particularly interested in getting feedback from JavaScript library and framework developers.

Web Components + Backbone: A Game-Changing Combination

Web Components promise to change how we think about modularity on the web, and when combined with the structure and organization of Backbone.js we can create portable, dynamic, encapsulated UI modules that fit into any web application.


Web Components open up new, low-level interfaces for developers to create modules on the web with Custom Elements, HTML Templates, HTML Imports, and the Shadow DOM. These are exciting new technologies for web modularity, but on their own they can provide neither the rich interactivity nor maintainable structure we’ve come to expect in our JavaScript web applications. Other JS libraries are already exploring mechanisms for integrating Web Components, but Backbone.js, with its light-weight, flexible API, is in a unique position to provide a solid foundation for UI modules, and indeed entire UI libraries, built with Web Components.

This talk will provide an introduction to Web Components, but will focus on how Backbone can utilize each of their APIs to create well-structured UI modules to be reused and shared between web applications. It will present patterns for creating these modules and consider best practices for creating components in sharable UI libraries. And while browser support for Web Components is rapidly improving, this talk will also consider the polyfills available to start using Web Components in Backbone.js applications today.

Web Components and Backbone.js complement each other and, together, are a revolutionary pair that offers new and exciting approaches for developing interactive UI modules on the web.



Meta programming with ECMAScript 6 proxies

This blog post explains the ECMAScript 6 (ES6) feature proxies. Proxies enable you to intercept and customize operations performed on objects (such as getting properties). They are a meta programming feature.


The code in this post occasionally uses other ES6 features. Consult “Using ECMAScript 6 today” for an overview of all of ES6.

Before we can get into what proxies are and why they are useful, we first need to understand what meta programming is.

Programming versus meta programming

In programming, there are levels:

  • At the base level (also called: application level), code processes user input.
  • At the meta level, code processes base level code.

Base and meta level can be different languages. In the following meta program, the meta programming language is JavaScript and the base programming language is Java.

    let str = 'Hello' + '!'.repeat(3);
    console.log('System.out.println("' + str + '")');

Meta programming can take different forms. In the previous example, we have printed Java code to the console. Let’s use JavaScript as both meta programming language and base programming language. The classic example for this is the eval() function, which lets you evaluate/compile JavaScript code on the fly. There are very few actual use cases for eval(). In the interaction below, we use it to evaluate the expression 5 + 2.

    > eval('5 + 2')
    7

Other JavaScript operations may not look like meta programming, but actually are, if you look closer:

    // Base level
    let obj = {
        hello() {
            console.log('Hello!');
        }
    };
    // Meta level
    for (let key of Object.keys(obj)) {
        console.log(key);
    }

The program is examining its own structure while running. This doesn’t look like meta programming, because the separation between programming constructs and data structures is fuzzy in JavaScript. All of the Object.* methods can be considered meta programming functionality.

Kinds of meta programming

Reflective meta programming means that a program processes itself. Kiczales et al. [2] distinguish three kinds of reflective meta programming:

  • Introspection: you have read-only access to the structure of a program.
  • Self-modification: you can change that structure.
  • Intercession: you can redefine the semantics of some language operations.

Let’s look at examples.

Example: introspection. Object.keys() performs introspection (see previous example).

Example: self-modification. The following function moveProperty moves a property from a source to a target. It performs self-modification via the bracket operator for property access, the assignment operator and the delete operator. (In production code, you’d probably use property descriptors for this task.)

    function moveProperty(source, propertyName, target) {
        target[propertyName] = source[propertyName];
        delete source[propertyName];
    }

Using moveProperty():

    > let obj1 = { prop: 'abc' };
    > let obj2 = {};
    > moveProperty(obj1, 'prop', obj2);
    > obj1
    {}
    > obj2
    { prop: 'abc' }

JavaScript doesn’t currently support intercession, proxies were created to fill that gap.

An overview of proxies

ECMAScript 6 proxies bring intercession to JavaScript. They work as follows. There are many operations that you can perform on an object obj. For example:

  • Getting a property prop (via obj.prop)
  • Listing enumerable own properties (via Object.keys(obj))

Proxies are special objects that allow you to provide custom implementations for some of these operations. A proxy is created with two parameters:

  • handler: For each operation, there is a corresponding handler method that – if present – performs that operation. Such a method intercepts the operation (on its way to the target) and is called a trap (a term borrowed from the domain of operating systems).
  • target: If the handler doesn’t intercept an operation then it is performed on the target. That is, it acts as a fallback for the handler. In a way, the proxy wraps the target.

In the following example, the handler intercepts the operations get (getting properties) and ownKeys (retrieving the own property keys).

    let target = {};
    let handler = {
        get(target, propKey, receiver) {
            console.log('get ' + propKey);
            return 123;
        },
        ownKeys(target) {
            return ['hello', 'world'];
        }
    };
    let proxy = new Proxy(target, handler);

When we get property foo, the handler intercepts that operation:

    >
    get foo
    123

Similarly, Object.keys() triggers ownKeys:

    > Object.keys(proxy)
    [ 'hello', 'world' ]

The handler doesn’t implement the trap set (setting properties). Therefore, setting is forwarded to target and leads to being set.

    > = 'abc';
    > = 'abc';

Function-specific traps

If the target is a function, two additional operations can be intercepted:

  • apply: Making a function call, triggered via proxy(···),···), proxy.apply(···).
  • construct: Making a constructor call, triggered via new proxy(···).

The reason for only enabling these traps for function targets is simple: You wouldn’t be able to forward the operations apply and construct, otherwise.
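
A small sketch of both traps (sum and the calls log are illustrative; Reflect.apply and Reflect.construct forward the operations to the target):

```javascript
function sum(a, b) { return a + b; }
let calls = [];
let tracedSum = new Proxy(sum, {
    apply(target, thisArg, args) {
        calls.push('apply ' + args);
        return Reflect.apply(target, thisArg, args);
    },
    construct(target, args) {
        calls.push('construct ' + args);
        return Reflect.construct(target, args);
    }
});
console.log(tracedSum(3, 4));       // 7 – the apply trap fired first
let instance = new tracedSum(1, 2); // triggers the construct trap
console.log(instance instanceof sum); // true
console.log(calls); // [ 'apply 3,4', 'construct 1,2' ]
```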

Revocable proxies

ECMAScript 6 lets you create proxies that can be revoked (switched off):

    let {proxy, revoke} = Proxy.revocable(target, handler);

On the left hand side of the assignment operator (=), we are using destructuring to access the properties proxy and revoke of the object returned by Proxy.revocable().

After you call the function revoke for the first time, any operation you apply to proxy causes a TypeError. Subsequent calls of revoke have no further effect.

    let target = {}; // Start with an empty object
    let handler = {}; // Don’t intercept anything
    let {proxy, revoke} = Proxy.revocable(target, handler); = 123;
    console.log(; // 123
    console.log(; // TypeError: Revoked

Proxies as prototypes

A proxy proto can become the prototype of an object obj. Some operations that begin in obj may continue in proto. One such operation is get.

    let proto = new Proxy({}, {
        get(target, propertyKey, receiver) {
            console.log('GET '+propertyKey);
            return target[propertyKey];
        }
    });
    let obj = Object.create(proto);
    obj.bla; // Output: GET bla

The property bla can’t be found in obj, which is why the search continues in proto and the trap get is triggered there. There are more operations that affect prototypes, they are listed at the end of this post.

Forwarding operations

Operations whose traps the handler doesn’t implement are automatically forwarded to the target. Sometimes there is some task you want to perform in addition to forwarding the operation. For example, a handler that intercepts all operations and logs them, but doesn’t prevent them from reaching the target:

    let handler = {
        deleteProperty(target, propKey) {
            console.log('DELETE ' + propKey);
            return delete target[propKey];
        },
        has(target, propKey) {
            console.log('HAS ' + propKey);
            return propKey in target;
        },
        // Other traps: similar
    };

For each trap, we first log the name of the operation and then forward it by performing it manually. ECMAScript 6 has the module-like object Reflect that helps with forwarding: for each trap

    handler.trap(target, arg_1, ···, arg_n)

Reflect has a method

    Reflect.trap(target, arg_1, ···, arg_n)

If we use Reflect, the previous example looks as follows.

    let handler = {
        deleteProperty(target, propKey) {
            console.log('DELETE ' + propKey);
            return Reflect.deleteProperty(target, propKey);
        },
        has(target, propKey) {
            console.log('HAS ' + propKey);
            return Reflect.has(target, propKey);
        },
        // Other traps: similar
    };

Now what each of the traps does is so similar that we can implement the handler via a proxy:

    let handler = new Proxy({}, {
        get(target, trapName, receiver) {
            // Return the handler method named trapName
            return function (...args) {
                // Slice away target object in args[0]
                console.log(trapName.toUpperCase()+' '+args.slice(1));
                // Forward the operation
                return Reflect[trapName](...args);
            };
        }
    });

For each trap, the proxy asks for a handler method via the get operation and we give it one. That is, all of the handler methods can be implemented via the single meta method get. It was one of the goals for the proxy API to make this kind of virtualization simple.

Let’s use this proxy-based handler:

    > let target = {};
    > let proxy = new Proxy(target, handler);
    > = 123;
    SET foo,123,[object Object]
    >
    GET foo,[object Object]
    123

The following interaction confirms that the set operation was correctly forwarded to the target:

    >
    123

Use cases for proxies

This section demonstrates what proxies can be used for. That will also give you the opportunity to see the API in action.

Implementing the DOM in JavaScript

The browser Document Object Model (DOM) is usually implemented as a mix of JavaScript and C++. Implementing it in pure JavaScript is useful for:

  • Emulating a browser environment, e.g. to manipulate HTML in Node.js. jsdom is one library that does that.
  • Speeding the DOM up (switching between JavaScript and C++ costs time).

Alas, the standard DOM can do things that are not easy to replicate in JavaScript. For example, most DOM collections are live views on the current state of the DOM that change dynamically whenever the DOM changes. As a result, pure JavaScript implementations of the DOM are not very efficient. One of the reasons for adding proxies to JavaScript was to help write more efficient DOM implementations.
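
To get a feel for why live views are a natural fit for proxies, here is a toy sketch (not real DOM code; the helper liveCollection and the node objects are invented for this illustration). Every read of the view re-derives its contents from the current state of the underlying array:

```javascript
// Hypothetical helper: a "live" filtered view of an array.
// Each property access re-computes the filtered result, so the
// view automatically reflects later changes to `allNodes`.
function liveCollection(allNodes, predicate) {
    return new Proxy(allNodes, {
        get(target, propKey, receiver) {
            let filtered = target.filter(predicate);
            return Reflect.get(filtered, propKey);
        }
    });
}

let nodes = [ {tag: 'div'}, {tag: 'span'} ];
let divs = liveCollection(nodes, n => n.tag === 'div');
console.log(divs.length); // 1
nodes.push({tag: 'div'});
console.log(divs.length); // 2 – the view stayed up to date
```

Recomputing on every access is exactly the kind of cost a real implementation would avoid, which is why efficient live views need deeper hooks than plain objects provide.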

Accessing a restful web service

A proxy can be used to create an object on which arbitrary methods can be invoked. In the following example, the function createWebService creates one such object, service. Invoking a method on service retrieves the contents of the web service resource with the same name. Retrieval is handled via an ECMAScript 6 promise.

    let service = createWebService('');
    // Read JSON data in
    service.employees().then(json => {
        let employees = JSON.parse(json);
        ···
    });

The following code is a quick and dirty implementation of createWebService in ECMAScript 5. Because we don’t have proxies, we need to know beforehand what methods will be invoked on service. The parameter propKeys provides us with that information; it holds an array with method names.

    function createWebService(baseUrl, propKeys) {
        let service = {};
        propKeys.forEach(function (propKey) {
            Object.defineProperty(service, propKey, {
                get: function () {
                    return httpGet(baseUrl+'/'+propKey);
                }
            });
        });
        return service;
    }

The ECMAScript 6 implementation of createWebService can use proxies and is simpler:

    function createWebService(baseUrl) {
        return new Proxy({}, {
            get(target, propKey, receiver) {
                return httpGet(baseUrl+'/'+propKey);
            }
        });
    }

Both implementations use the following function to make HTTP GET requests (how it works is explained in the 2ality blog post on promises).

    function httpGet(url) {
        return new Promise(
            (resolve, reject) => {
                let request = new XMLHttpRequest();
                Object.assign(request, {
                    onreadystatechange() {
                        if (this.status === 200) {
                            // Success
                            resolve(this.response);
                        } else {
                            // Something went wrong (404 etc.)
                            reject(new Error(this.statusText));
                        }
                    },
                    onerror() {
                        reject(new Error(
                            'XMLHttpRequest Error: '+this.statusText));
                    }
                });
      'GET', url);
                request.send();
            });
    }

Tracing property accesses

The example in this section is inspired by Brendan Eich’s talk “Proxies are Awesome”: We want to trace when a given set of properties is read or changed. To demonstrate how that works, let’s create a class for points and trace accesses to the properties of an instance.

    class Point {
        constructor(x, y) {
            this.x = x;
            this.y = y;
        }
        toString() {
            return 'Point('+this.x+','+this.y+')';
        }
    }

    // Trace accesses to properties `x` and `y`
    let p = new Point(5, 7);
    p = tracePropAccess(p, ['x', 'y']);

Getting and setting properties of p now has the following effects:

    > p.x
    GET x
    > p.x = 21
    SET x=21

Intriguingly, tracing also works whenever Point accesses the properties, because this now refers to the proxy, not to an instance of Point.

    > p.toString()
    GET x
    GET y

In ECMAScript 5, you’d implement tracePropAccess() as follows. We replace each property with a getter and a setter that traces accesses. The setters and getters use an extra object, propData, to store the data of the properties. Note that we are destructively changing the original implementation, which means that we are meta programming.

    function tracePropAccess(obj, propKeys) {
        // Store the property data here
        let propData = Object.create(null);
        // Replace each property with a getter and a setter
        propKeys.forEach(function (propKey) {
            propData[propKey] = obj[propKey];
            Object.defineProperty(obj, propKey, {
                get: function () {
                    console.log('GET '+propKey);
                    return propData[propKey];
                },
                set: function (value) {
                    console.log('SET '+propKey+'='+value);
                    propData[propKey] = value;
                }
            });
        });
        return obj;
    }

In ECMAScript 6, we can use a simpler, proxy-based solution. We intercept property getting and setting and don’t have to change the implementation.

    function tracePropAccess(obj, propKeys) {
        let propKeySet = new Set(propKeys); // the constructor expects an iterable
        return new Proxy(obj, {
            get(target, propKey, receiver) {
                if (propKeySet.has(propKey)) {
                    console.log('GET '+propKey);
                }
                return Reflect.get(target, propKey, receiver);
            },
            set(target, propKey, value, receiver) {
                if (propKeySet.has(propKey)) {
                    console.log('SET '+propKey+'='+value);
                }
                return Reflect.set(target, propKey, value, receiver);
            }
        });
    }

Warning about unknown properties

When it comes to accessing properties, JavaScript is very forgiving. For example, if you try to read a property and misspell its name, you don’t get an exception, you get the result undefined. You can use proxies to get an exception in such a case. This works as follows. We make the proxy a prototype of an object.

If a property isn’t found in the object, the get trap of the proxy is triggered. If the property doesn’t even exist in the prototype chain after the proxy, it really is missing and we throw an exception. Otherwise, we return the value of the inherited property. We do so by forwarding the get operation to the target, whose prototype is the prototype of the proxy.

    let PropertyChecker = new Proxy({}, {
        get(target, propKey, receiver) {
            if (!(propKey in target)) {
                throw new ReferenceError('Unknown property: '+propKey);
            }
            return Reflect.get(target, propKey, receiver);
        }
    });

Let’s use PropertyChecker for an object that we create:

    > let obj = { __proto__: PropertyChecker, foo: 123 };
    >  // own
    123
    >  // typo
    ReferenceError: Unknown property: fo
    > obj.toString()  // inherited
    '[object Object]'

If we turn PropertyChecker into a constructor, we can use it for ECMAScript 6 classes via extends:

    function PropertyChecker() { }
    PropertyChecker.prototype = new Proxy(···);

    class Point extends PropertyChecker {
        constructor(x, y) {
            super(); // subclass constructors must call super() first
            this.x = x;
            this.y = y;
        }
    }

    let p = new Point(5, 7);
    console.log(p.x); // 5
    console.log(p.z); // ReferenceError

If you are worried about accidentally creating properties, you have two options: You can either create a proxy that traps set. Or you can make an object obj non-extensible via Object.preventExtensions(obj), which means that JavaScript doesn’t let you add new (own) properties to obj.

Negative array indices

Some array methods let you refer to the last element via -1, to the second-to-last element via -2, etc. For example:

    > ['a', 'b', 'c'].slice(-1)
    [ 'c' ]

Alas, that doesn’t work when accessing elements via the bracket operator ([]). We can, however, use proxies to add that capability. The following function createArray() creates arrays that support negative indices. It does so by wrapping proxies around array instances. The proxies intercept the get operation that is triggered by the bracket operator.

    function createArray(...elements) {
        let handler = {
            get(target, propKey, receiver) {
                let index = Number(propKey);
                // Sloppy way of checking for negative indices
                if (index < 0) {
                    propKey = String(target.length + index);
                }
                return Reflect.get(target, propKey, receiver);
            }
        };
        // Wrap a proxy around an array
        let target = [];
        target.push(...elements);
        return new Proxy(target, handler);
    }
    let arr = createArray('a', 'b', 'c');
    console.log(arr[-1]); // c

Acknowledgement: The idea for this example comes from a blog post.

Data binding

Data binding is about syncing data between objects. One popular use case is widgets based on the MVC (Model View Controller) pattern: with data binding, the view (the widget) stays up to date when you change the model (the data visualized by the widget).

To implement data binding, you have to observe and react to changes made to an object. In the following code snippet, I sketch how that could work for an array.

    let array = [];
    let observedArray = new Proxy(array, {
        set(target, propertyKey, value, receiver) {
            console.log(propertyKey+'='+value);
            target[propertyKey] = value;
            return true; // indicate success
        }
    });
    observedArray.push('a');
    // Output:
    // 0=a
    // length=1

Data binding is a complex topic. Given its popularity and concerns over proxies not being performant enough, a dedicated mechanism has been created for data binding: Object.observe(). It will probably be part of ECMAScript 7 and is already supported by Chrome.

Consult Addy Osmani’s article “Data-binding Revolutions with Object.observe()” for more information on Object.observe().

Revocable references

Revocable references work as follows: A client is not allowed to access an important resource (an object) directly, only via a reference (an intermediate object, a wrapper around the resource). Normally, every operation applied to the reference is forwarded to the resource. After the client is done, the resource is protected by revoking the reference, by switching it off. Henceforth, applying operations to the reference throws exceptions and nothing is forwarded, anymore.

In the following example, we create a revocable reference for a resource. We then read one of the resource’s properties via the reference. That works, because the reference grants us access. Next, we revoke the reference. Now the reference doesn’t let us read the property, anymore.

    let resource = { x: 11, y: 8 };
    let {reference, revoke} = createRevocableReference(resource);
    // Access granted
    console.log(reference.x); // 11
    revoke();
    // Access denied
    console.log(reference.x); // TypeError: Revoked

Proxies are ideally suited for implementing revocable references, because they can intercept and forward operations. This is a simple proxy-based implementation of createRevocableReference:

    function createRevocableReference(target) {
        let enabled = true;
        return {
            reference: new Proxy(target, {
                get(target, propKey, receiver) {
                    if (!enabled) {
                        throw new TypeError('Revoked');
                    }
                    return Reflect.get(target, propKey, receiver);
                },
                has(target, propKey) {
                    if (!enabled) {
                        throw new TypeError('Revoked');
                    }
                    return Reflect.has(target, propKey);
                }
                // Other traps: similar
            }),
            revoke() {
                enabled = false;
            }
        };
    }

The code can be simplified via the proxy-as-handler technique from the previous section. This time, the handler basically is the Reflect object. Thus, the get trap normally returns the appropriate Reflect method. If the reference has been revoked, a TypeError is thrown, instead.

    function createRevocableReference(target) {
        let enabled = true;
        let handler = new Proxy({}, {
            get(dummyTarget, trapName, receiver) {
                if (!enabled) {
                    throw new TypeError('Revoked');
                }
                return Reflect[trapName];
            }
        });
        return {
            reference: new Proxy(target, handler),
            revoke() {
                enabled = false;
            }
        };
    }

However, you don’t have to implement revocable references yourself, because ECMAScript 6 lets you create proxies that can be revoked. This time, the revoking happens in the proxy, not in the handler. All the handler has to do is forward every operation to the target. As we have seen, that happens automatically if the handler doesn’t implement any traps.

    function createRevocableReference(target) {
        let handler = {}; // forward everything
        let { proxy, revoke } = Proxy.revocable(target, handler);
        return { reference: proxy, revoke };
    }

Membranes build on the idea of revocable references: Environments that are designed to run untrusted code wrap a membrane around that code to isolate it and keep the rest of the system safe. Objects pass the membrane in two directions:

  • The code may receive objects from the outside.
  • Or it may hand objects to the outside.

In both cases, revocable references are wrapped around the objects. Objects returned by wrapped functions or methods are also wrapped.

Once the untrusted code is done, all of those references are revoked. As a result, the untrusted code can no longer execute code on the outside, and the outside objects it holds cease to work as well. The Caja Compiler is “a tool for making third party HTML, CSS and JavaScript safe to embed in your website”. It uses membranes to achieve this task.
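
The following toy membrane conveys the idea under strong simplifications (the function makeMembrane is invented for this sketch; it only wraps plain objects that are read through the membrane and revokes all wrappers at once):

```javascript
// Simplified membrane: every object read through the membrane is
// itself wrapped in a revocable proxy; revoke() switches off all
// wrappers that were ever handed out.
function makeMembrane(obj) {
    let revokes = [];
    function wrap(value) {
        // Only objects are wrapped; primitives pass through
        if (value === null || typeof value !== 'object') return value;
        let {proxy, revoke} = Proxy.revocable(value, {
            get(target, propKey, receiver) {
                return wrap(Reflect.get(target, propKey, receiver));
            }
        });
        revokes.push(revoke);
        return proxy;
    }
    return {
        wrapper: wrap(obj),
        revoke() { revokes.forEach(r => r()); }
    };
}

let {wrapper, revoke} = makeMembrane({ inner: { x: 1 } });
console.log(wrapper.inner.x); // 1
revoke();
// Any further access such as wrapper.inner now throws a TypeError
```

A real membrane must also wrap functions, handle identity (the same object should always get the same wrapper), and cover all traps, which is why production implementations are considerably more involved.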

Other use cases

There are more use cases for proxies. For example:

  • Local placeholders that forward method invocations to remote objects. Similar: web service example.
  • Data access objects for databases: reading and writing to the object reads and writes to the database. Similar: web service example.
  • Profiling: Intercept method invocations to track how much time is spent in each method. Similar: tracing example.
  • Type checking: Nicholas Zakas has used proxies to type-check objects.

The design of the proxy API

In this section, we go deeper into how proxies work and why they work that way.

Stratification: keeping base level and meta level separate

Firefox has allowed you to do some interceptive meta programming for a while: If you define a method whose name is __noSuchMethod__, it is notified whenever a method is called that doesn’t exist. The following is an example of using __noSuchMethod__.

    let obj = {
        __noSuchMethod__: function (name, args) {
            console.log(name+': '+args);
        }
    };
    // Neither of the following two methods exist,
    // but we can make it look like they do, 2);    // Output: foo: 1,2, 2); // Output: bar: 1,2

Thus, __noSuchMethod__ works similarly to a proxy trap. In contrast to proxies, the trap is an own or inherited method of the object whose operations we want to intercept. The problem with that approach is that base level and meta level are mixed. Base-level code may accidentally invoke or see a meta level method and there is the possibility of accidentally defining a meta level method.

Even in standard ECMAScript 5, base level and meta level are sometimes mixed. For example, the following meta programming mechanisms can fail, because they exist at the base level:

  • obj.hasOwnProperty(propKey): This call can fail if a property in the prototype chain overrides the built-in implementation. For example, it fails if obj is { hasOwnProperty: null }. Safe ways to call this method are, propKey) and its abbreviated version {}, propKey).
  •···), func.apply(···): For these two methods, problem and solution are the same as with hasOwnProperty.
  • obj.__proto__: In most JavaScript engines, __proto__ is a special property that lets you get and set the prototype of obj. Hence, when you use objects as dictionaries, you must be careful to avoid __proto__ as a property key.

By now, it should be obvious that making (base level) property keys special is problematic. Therefore, proxies are stratified – base level (the proxy object) and meta level (the handler object) are separate.

Virtual objects versus wrappers

Proxies are used in two roles:

  • As wrappers, they wrap their targets and control access to them. Examples of wrappers are revocable references and tracing proxies.

  • As virtual objects, they are simply objects with special behavior and their targets don’t matter. An example is a proxy that forwards method calls to a remote object.

An earlier design of the proxy API conceived proxies as purely virtual objects. However, it turned out that even in that role, a target was useful, to enforce invariants (which is explained later) and as a fallback for traps that the handler doesn’t implement.

Transparent virtualization and handler encapsulation

Proxies are shielded in two ways:

  • It is impossible to determine whether an object is a proxy or not (transparent virtualization).
  • You can’t access a handler via its proxy (handler encapsulation).

Both principles give proxies considerable power for impersonating other objects. One reason for enforcing invariants (as explained later) is to keep that power in check.

If you do need a way to tell proxies apart from non-proxies, you have to implement it yourself. The following code is a module lib.js that exports two functions: one of them creates proxies, the other one determines whether an object is one of those proxies.

    // lib.js
    let proxies = new WeakSet();
    export function createProxy(obj) {
        let handler = {};
        let proxy = new Proxy(obj, handler);
        proxies.add(proxy);
        return proxy;
    }
    export function isProxy(obj) {
        return proxies.has(obj);
    }

This module uses the ECMAScript 6 data structure WeakSet for keeping track of proxies. WeakSet is ideally suited for this purpose, because it doesn’t prevent its elements from being garbage-collected.

The next example shows how lib.js can be used.

    // main.js
    import { createProxy, isProxy } from './lib.js';
    let p = createProxy({});
    console.log(isProxy(p)); // true
    console.log(isProxy({})); // false

The meta object protocol and proxy traps

This section examines how JavaScript is structured internally and how the set of proxy traps was chosen.

The term protocol is highly overloaded in computer science. One definition is:

A protocol is about achieving tasks via an object; it comprises a set of methods plus a set of rules for using them.

Note that this definition is different from viewing protocols as interfaces (as, for example, Objective C does), because it includes rules.

The ECMAScript specification describes how to execute JavaScript code. It includes a protocol for handling objects. This protocol operates at a meta level and is sometimes called the meta object protocol (MOP). The JavaScript MOP consists of own internal methods that all objects have. “Internal” means that they exist only in the specification (JavaScript engines may or may not have them) and are not accessible from JavaScript. The names of internal methods are written in double square brackets.

The internal method for getting properties is called [[Get]]. If we pretend that property names with square brackets are legal, this method would roughly be implemented as follows in JavaScript.

    // Method definition
    [[Get]](propKey, receiver) {
        let desc = this.[[GetOwnProperty]](propKey);
        if (desc === undefined) {
            let parent = this.[[GetPrototypeOf]]();
            if (parent === null) return undefined;
            return parent.[[Get]](propKey, receiver); // (*)
        }
        if ('value' in desc) {
            return desc.value;
        }
        let getter = desc.get;
        if (getter === undefined) return undefined;
        return getter.[[Call]](receiver, []);
    }

The MOP methods called in this code are:

  • [[GetOwnProperty]] (trap getOwnPropertyDescriptor)
  • [[GetPrototypeOf]] (trap getPrototypeOf)
  • [[Get]] (trap get)
  • [[Call]] (trap apply)

In line (*) you can see why proxies in a prototype chain find out about get if a property isn’t found in an “earlier” object: If there is no own property whose key is propKey, the search continues in the prototype parent of this.

Fundamental versus derived operations. You can see that [[Get]] calls other MOP operations. Operations that do that are called derived. Operations that don’t depend on other operations are called fundamental.

The MOP of proxies

The meta object protocol of proxies is different from that of normal objects. For normal objects, derived operations call other operations. For proxies, each operation is either intercepted by a handler method or forwarded to the target.

What operations should be interceptable via proxies? One possibility is to only provide traps for fundamental operations. The alternative is to include some derived operations. The advantage of derived traps is that they increase performance and are more convenient: If there wasn’t a trap for get, you’d have to implement its functionality via getOwnPropertyDescriptor. One problem with derived traps is that they can lead to proxies behaving inconsistently. For example, get may return a value that is different from the value stored in the descriptor returned by getOwnPropertyDescriptor.

Selective intercession: what operations should be interceptable?

Intercession by proxies is selective: you can’t intercept every language operation. Why were some operations excluded? Let’s look at two reasons.

First, stable operations are not well suited for intercession. An operation is stable if it always produces the same results for the same arguments. If a proxy can trap a stable operation, it can become unstable and thus unreliable. Strict equality (===) is one such stable operation. It can’t be trapped and its result is computed by treating the proxy itself as just another object. Another way of maintaining stability is by applying an operation to the target instead of the proxy. As explained later, when we look at how invariants are enforced for proxies, this happens when Object.getPrototypeOf() is applied to a proxy whose target is non-extensible.
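
For example, a proxy is never === to its target; the comparison treats both as ordinary, distinct objects:

```javascript
// Strict equality cannot be trapped; the proxy is compared
// as an object in its own right
let someTarget = {};
let someProxy = new Proxy(someTarget, {});
console.log(someProxy === someTarget); // false
console.log(someProxy === someProxy); // true
```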

A second reason for not making more operations interceptable is that intercession means executing custom code in situations where that normally isn’t possible. The more this interleaving of code happens, the harder it is to understand and debug a program.

Traps: “get” versus “invoke”

If you want to create virtual methods via ECMAScript 6 proxies, you have to return functions from a get trap. That raises the question: why not introduce an extra trap for method invocations (e.g. invoke)? That would enable us to distinguish between:

  • Getting properties via obj.prop (trap get)
  • Invoking methods via obj.prop() (trap invoke)

There are two reasons for not doing so.

First, not all implementations distinguish between get and invoke. For example, Apple’s JavaScriptCore doesn’t.

Second, extracting a method and invoking it later via call() or apply() should have the same effect as invoking the method via dispatch. In other words, the following two variants should work equivalently. If there was an extra trap invoke then that equivalence would be harder to maintain.

    // Variant 1: call via dynamic dispatch
    let result = obj.m();
    // Variant 2: extract and call directly
    let m = obj.m;
    let result =;

Only possible with invoke. Some things can only be done if you are able to distinguish between get and invoke. Those things are therefore impossible with the current proxy API. Two examples are: auto-binding and intercepting missing methods.

First, by making a proxy the prototype of an object obj, you can automatically bind methods:

  • Retrieving the value of a method m via obj.m returns a function whose this is bound to obj.
  • obj.m() performs a method call.

Auto-binding helps with using methods as callbacks. For example, variant 2 from the previous example becomes simpler:

    let boundMethod = obj.m;
    let result = boundMethod();

Second, invoke lets a proxy emulate the previously mentioned __noSuchMethod__ mechanism that Firefox supports. The proxy would again become the prototype of an object obj. It would react differently depending on how an unknown property foo is accessed:

  • If you read that property via, no intercession happens and undefined is returned.
  • If you make the method call then the proxy intercepts and, e.g., notifies a callback.

Enforcing invariants for proxies

Before we look at what invariants are and how they are enforced for proxies, let’s review how objects can be protected via non-extensibility and non-configurability.

Protecting objects

There are two ways of protecting objects:

  • non-extensibility protects objects
  • non-configurability protects properties (or rather, their attributes)

Non-extensibility. If an object is non-extensible, you can’t add properties and you can’t change its prototype:

    'use strict'; // switch on strict mode to get TypeErrors
    let obj = Object.preventExtensions({});
    console.log(Object.isExtensible(obj)); // false = 123; // TypeError: object is not extensible
    Object.setPrototypeOf(obj, null); // TypeError: object is not extensible

Non-configurability. All the data of a property is stored in attributes. A property is like a record and attributes are like the fields of that record. Examples of attributes:

  • The attribute value holds the value of a property.
  • The boolean attribute writable controls whether a property’s value can be changed.
  • The boolean attribute configurable controls whether a property’s attributes can be changed.

Thus, if a property is both non-writable and non-configurable, it is read-only and remains that way:

    'use strict'; // switch on strict mode to get TypeErrors
    let obj = {};
    Object.defineProperty(obj, 'foo', {
        value: 123,
        writable: false,
        configurable: false
    });
    console.log(; // 123 = 'a'; // TypeError: Cannot assign to read only property
    Object.defineProperty(obj, 'foo', {
        configurable: true
    }); // TypeError: Cannot redefine property

For more details on these topics (including how Object.defineProperty() works) consult “Speaking JavaScript”.

Enforcing invariants

Traditionally, non-extensibility and non-configurability are:

  • Universal: they work for all objects.
  • Monotonic: once switched on, they can’t be switched off again.

These and other characteristics that remain unchanged in the face of language operations are called invariants. With proxies, it is easy to violate invariants, as they are not intrinsically bound by non-extensibility etc.

The proxy API prevents proxies from violating invariants by checking the parameters and results of handler methods. Non-extensibility and non-configurability are enforced by using the target object for bookkeeping. The following are a few examples of invariants (for an arbitrary object obj) and how they are enforced for proxies (an exhaustive list is given at the end of this post):

  • Invariant: Object.isExtensible(obj) must return a boolean.
    • Enforced by coercing the value returned by the handler to a boolean.
  • Invariant: Object.getOwnPropertyDescriptor(obj, ···) must return an object or undefined.
    • Enforced by throwing a TypeError if the handler doesn’t return an appropriate value.
  • Invariant: If Object.preventExtensions(obj) returns true then all future calls must return false and obj must now be non-extensible.
    • Enforced by throwing a TypeError if the handler returns true, but the target object is not extensible.
  • Invariant: Once an object has been made non-extensible, Object.isExtensible(obj) must always return false.
    • Enforced by throwing a TypeError if the result returned by the handler is not the same (after coercion) as Object.isExtensible(target).

Enforcing invariants has the following benefits:

  • Proxies work like all other objects with regard to extensibility and configurability. Therefore, universality is maintained. This is achieved without preventing proxies from virtualizing (impersonating) protected objects.
  • A protected object can’t be misrepresented by wrapping a proxy around it. Misrepresentation can be caused by bugs or by malicious code.

The following sections give examples of invariants being enforced.

Example: the prototype of a non-extensible target must be represented faithfully

In response to the getPrototypeOf trap, the proxy must return the target’s prototype if the target is non-extensible.

To demonstrate this invariant, let’s create a handler that returns a prototype that is different from the target’s prototype:

    let fakeProto = {};
    let handler = {
        getPrototypeOf(t) {
            return fakeProto;
        }
    };

Faking the prototype works if the target is extensible:

    let extensibleTarget = {};
    let ext = new Proxy(extensibleTarget, handler);
    console.log(Object.getPrototypeOf(ext) === fakeProto); // true

We do, however, get an error if we fake the prototype for a non-extensible object.

    let nonExtensibleTarget = Object.preventExtensions({});
    let nonExt = new Proxy(nonExtensibleTarget, handler);
    Object.getPrototypeOf(nonExt); // TypeError

Example: non-writable non-configurable target properties must be represented faithfully

If the target has a non-writable non-configurable property then the handler must return that property’s value in response to a get trap. To demonstrate this invariant, let’s create a handler that always returns the same value for properties.

    let handler = {
        get(target, propKey) {
            return 'abc';
        }
    };
    let target = Object.defineProperties(
        {}, {
            foo: {
                value: 123,
                writable: true,
                configurable: true
            },
            bar: {
                value: 456,
                writable: false,
                configurable: false
            }
        });
    let proxy = new Proxy(target, handler);

Property foo is not both non-writable and non-configurable, which means that the handler is allowed to pretend that it has a different value:

    >
    'abc'

However, property bar is both non-writable and non-configurable. Therefore, we can’t fake its value:

    >
    TypeError: Invariant check failed

Reference: the proxy API

This section serves as a quick reference for the proxy API: the global objects Proxy and Reflect.

Creating proxies

There are two ways to create proxies:

  • proxy = new Proxy(target, handler)
    Creates a new proxy object with the given target and the given handler.

  • {proxy, revoke} = Proxy.revocable(target, handler) Creates a proxy that can be revoked via the function revoke. revoke can be called multiple times, but only the first call has an effect and switches proxy off. Afterwards, any operation performed on proxy leads to a TypeError being thrown.

Handler methods

This subsection explains what traps can be implemented by handlers and what operations trigger them. Several traps return boolean values. For the traps has and isExtensible, the boolean is the result of the operation. For all other traps, the boolean indicates whether the operation succeeded or not.

Traps for all objects:

  • defineProperty(target, propKey, propDesc) → boolean
    • Object.defineProperty(proxy, propKey, propDesc)
  • deleteProperty(target, propKey) → boolean
    • delete proxy[propKey]
    • delete // propKey = 'foo'
  • enumerate(target) → Iterator
    • for (x in proxy) ···
  • get(target, propKey, receiver) → any
    • proxy[propKey]
    • // propKey = 'foo'
  • getOwnPropertyDescriptor(target, propKey) → PropDesc|Undefined
    • Object.getOwnPropertyDescriptor(proxy, propKey)
  • getPrototypeOf(target) → Object|Null
    • Object.getPrototypeOf(proxy)
  • has(target, propKey) → boolean
    • propKey in proxy
  • isExtensible(target) → boolean
    • Object.isExtensible(proxy)
  • ownKeys(target) → Array<PropertyKey>
    • Object.getOwnPropertyNames(proxy)
    • Object.getOwnPropertySymbols(proxy)
    • Object.keys(proxy)
  • preventExtensions(target) → boolean
    • Object.preventExtensions(proxy)
  • set(target, propKey, value, receiver) → boolean
    • proxy[propKey] = value
    • = value // propKey = 'foo'
  • setPrototypeOf(target, proto) → boolean
    • Object.setPrototypeOf(proxy, proto)

Traps for functions (available if target is a function):

  • apply(target, thisArgument, argumentsList) → any
    • proxy.apply(thisArgument, argumentsList)
    •, ...argumentsList)
    • proxy(...argumentsList)
  • construct(target, argumentsList) → Object
    • new proxy(...argumentsList)
Fundamental operations versus derived operations

The following operations are fundamental, they don’t use other operations to do their work: apply, defineProperty, deleteProperty, getOwnPropertyDescriptor, getPrototypeOf, isExtensible, ownKeys, preventExtensions, setPrototypeOf

All other operations are derived, they can be implemented via fundamental operations. For example, for data properties, get can be implemented by iterating over the prototype chain via getPrototypeOf and calling getOwnPropertyDescriptor for each chain member until either an own property is found or the chain ends.


Invariants

Invariants are safety constraints for handlers. This subsection documents which invariants are enforced by the proxy API and how. Whenever you read “the handler must do X” below, it means that a TypeError is thrown if it doesn’t. Some invariants restrict return values, others restrict parameters. The correct return value of a trap is ensured in two ways: normally, an illegal value means that a TypeError is thrown; but whenever a boolean is expected, coercion is used to convert non-booleans to legal values.

This is the complete list of invariants that are enforced (source: ECMAScript 6 specification):

  • apply(target, thisArgument, argumentsList)
    • No invariants are enforced.
  • construct(target, argumentsList)
    • The result returned by the handler must be an object (not null or a primitive value).
  • defineProperty(target, propKey, propDesc)
    • If the target is not extensible then propDesc can’t create a property that the target doesn’t already have.
    • If propDesc sets the attribute configurable to false then the target must have a non-configurable own property whose key is propKey.
    • If propDesc was used to (re)define an own property for the target then that must not cause an exception. An exception is thrown if a change is forbidden by the attributes writable and configurable.
  • deleteProperty(target, propKey)
    • Non-configurable own properties of the target can’t be deleted.
  • enumerate(target)
    • The handler must return an object.
  • get(target, propKey, receiver)
    • If the target has an own, non-writable, non-configurable data property whose key is propKey then the handler must return that property’s value.
    • If the target has an own, non-configurable, getter-less accessor property whose key is propKey then the handler must return undefined.
  • getOwnPropertyDescriptor(target, propKey)
    • The handler must return either an object or undefined.
    • Non-configurable own properties of the target can’t be reported as non-existent by the handler.
    • If the target is non-extensible then exactly the target’s own properties must be reported by the handler as existing (and none of them as missing).
    • If the handler reports a property as non-configurable then that property must be a non-configurable own property of the target.
    • If the result returned by the handler were used to (re)define an own property for the target then that must not cause an exception. An exception is thrown if the change is not allowed by the attributes writable and configurable. Therefore, the handler can’t report a non-configurable property as configurable and it can’t report a different value for a non-configurable non-writable property.
  • getPrototypeOf(target)
    • The result must be either an object or null.
    • If the target object is not extensible then the handler must return the prototype of the target object.
  • has(target, propKey)
    • A handler must not hide (report as non-existent) a non-configurable own property of the target.
    • If the target is non-extensible then no own property of the target may be hidden.
  • isExtensible(target)
    • The result returned by the handler is coerced to boolean.
    • After coercion to boolean, the value returned by the handler must be the same as Object.isExtensible(target).
  • ownKeys(target)
    • The handler must return an object, which is treated as array-like and converted into an array.
    • Each element of the result must be either a string or a symbol.
    • The result must contain the keys of all non-configurable own properties of the target.
    • If the target is not extensible then the result must contain exactly the keys of the own properties of the target (and no other values).
  • preventExtensions(target)
    • The result returned by the handler is coerced to boolean.
    • If the handler returns a truthy value (indicating a successful change) then Object.isExtensible(target) must be false afterwards.
  • set(target, propKey, value, receiver)
    • If the target has an own, non-writable, non-configurable data property whose key is propKey then value must be the same as the value of that property (i.e., the property can’t be changed).
    • If the target has an own, non-configurable, setter-less accessor property then a TypeError is thrown (i.e., such a property can’t be set).
  • setPrototypeOf(target, proto)
    • The result returned by the handler is coerced to boolean.
    • If the target is not extensible, the prototype can’t be changed. This is enforced as follows: If the target is not extensible and the handler returns a truthy value (indicating a successful change) then proto must be the same as the prototype of the target. Otherwise, a TypeError is thrown.

The prototype chain

The following operations of normal objects perform operations on objects in the prototype chain (source: ECMAScript 6 specification). Therefore, if one of the objects in that chain is a proxy, its traps are triggered. The specification implements the operations as internal own methods (that are not visible to JavaScript code). But in this section, we pretend that they are normal methods that have the same names as the traps. The parameter target becomes the receiver of the method call.

  • target.enumerate()
    Traverses the prototype chain of target via getPrototypeOf. Per object, it retrieves the keys via ownKeys and examines whether a property is enumerable via getOwnPropertyDescriptor.
  • target.get(propertyKey, receiver)
    If target has no own property with the given key, get is invoked on the prototype of target.
  • target.has(propertyKey)
    Similarly to get, has is invoked on the prototype of target if target has no own property with the given key.
  • target.set(propertyKey, value, receiver)
    Similarly to get, set is invoked on the prototype of target if target has no own property with the given key.

All other operations only affect own properties, they have no effect on the prototype chain.


Reflect

The global object Reflect implements all interceptable operations of the JavaScript meta object protocol as methods. The names of those methods are the same as those of the handler methods, which, as we have seen, helps with forwarding operations from the handler to the target.

  • Reflect.apply(target, thisArgument, argumentsList) → any
    Better version of Function.prototype.apply().
  • Reflect.construct(target, argumentsList) → Object
    The new operator as a function.
  • Reflect.defineProperty(target, propertyKey, propDesc) → boolean
    Similar to Object.defineProperty().
  • Reflect.deleteProperty(target, propertyKey) → boolean
    The delete operator as a function.
  • Reflect.enumerate(target) → Iterator
    Returns an iterator over all enumerable string property keys of target. In other words, the iterator returns all values that the for-in loop would iterate over.
  • Reflect.get(target, propertyKey, receiver?) → any
    A function that gets properties.
  • Reflect.getOwnPropertyDescriptor(target, propertyKey) → PropDesc|Undefined
    Same as Object.getOwnPropertyDescriptor().
  • Reflect.getPrototypeOf(target) → Object|Null
    Same as Object.getPrototypeOf().
  • Reflect.has(target, propertyKey) → boolean
    The in operator as a function.
  • Reflect.isExtensible(target) → boolean
    Same as Object.isExtensible().
  • Reflect.ownKeys(target) → Array<PropertyKey>
    Returns all own property keys (strings and symbols!) in an array.
  • Reflect.preventExtensions(target) → boolean
    Similar to Object.preventExtensions().
  • Reflect.set(target, propertyKey, value, receiver?) → boolean
    A function that sets properties.
  • Reflect.setPrototypeOf(target, proto) → boolean
    The new standard way of setting the prototype of an object. The current non-standard way that works in most engines is to set the special property __proto__.

Several methods have boolean results. For has and isExtensible, they are the results of the operation. For the remaining methods, they indicate whether the operation succeeded.

Apart from forwarding operations, why is Reflect useful [4]?

  • Different return values: Reflect duplicates the following methods of Object, but its methods return booleans indicating whether the operation succeeded (where the Object methods return the object that was modified).
    • Object.defineProperty(obj, propKey, propDesc) → Object
    • Object.preventExtensions(obj) → Object
    • Object.setPrototypeOf(obj, proto) → Object
  • Operators as functions: The following Reflect methods implement functionality that is otherwise only available via operators:
    • Reflect.construct(target, argumentsList) → Object
    • Reflect.deleteProperty(target, propertyKey) → boolean
    • Reflect.get(target, propertyKey, receiver?) → any
    • Reflect.has(target, propertyKey) → boolean
    • Reflect.set(target, propertyKey, value, receiver?) → boolean
  • The for-in loop as an iterator: This is rarely useful, but if you need it, you can get an iterator over all enumerable (own and inherited) string property keys of an object.
    • Reflect.enumerate(target) → Iterator
  • Shorter version of apply: The only safe way to invoke the built-in function method apply is via, thisArg, args) (or similar). Reflect.apply(func, thisArg, args) is cleaner and shorter.

State of implementations

As usual, Kangax’ ES6 compatibility table is the best way of finding out how well engines support proxies. As of December 2014, Internet Explorer has the most complete support and Firefox supports some of the API (caveats: get doesn’t work properly, getPrototypeOf is not supported yet and Reflect is empty). No other browser or engine currently supports proxies.


Conclusion

This concludes our in-depth look at the proxy API. For each application, you have to take performance into consideration and – if necessary – measure. Proxies may not always be fast enough. On the other hand, performance is often not crucial and it is nice to have the meta programming power that proxies give us. As we have seen, there are numerous use cases they can help with.


Acknowledgements

Thanks go to Tom Van Cutsem: his paper [1] is the most important source of this blog post and he kindly answered questions about the proxy API that I had.

Technical Reviewers:

Further reading

  1. “On the design of the ECMAScript Reflection API” by Tom Van Cutsem and Mark Miller. Technical report, 2012.
  2. “The Art of the Metaobject Protocol” by Gregor Kiczales, Jim des Rivieres and Daniel G. Bobrow. Book, 1991.
  3. “Putting Metaclasses to Work: A New Dimension in Object-Oriented Programming” by Ira R. Forman and Scott H. Danforth. Book, 1999.
  4. “Harmony-reflect: Why should I use this library?” by Tom Van Cutsem.
Thanks to readers, $1,044 go to kids in Ferguson!

Last Tuesday, I started a little Thanksgiving charity drive to get some money together for kids in Ferguson. I couldn’t be happier that I’ve sold $1,044 worth of my book, with 100% of this going to Donors Choose projects in Ferguson, MO. (I’m covering the payment processor fees out of my own pocket, so all $1,044 go directly to the kids!)

My wife and I are big proponents of supporting literacy—it’s the foundation on which all education, learning and communication with people that you can’t directly talk to is based. If you don’t start to read at an early age, chances are that you never get into it.

Unfortunately as a society we seem more obsessed about hate and fear than supporting those who can’t help themselves. Children are at the receiving end of racism and institutionalized blaming and shaming of minorities. No money for education but buying tanks for the police is just one of the many symptoms of this.

While it’s only a small gesture, we have to start somewhere. Consider regularly giving money and/or supporting local kids. There’s more that you can do than you can think of.

Here are the projects fully or partially funded with the purchases:

In the interest of transparency, here are 1) the proof of donating to Donors Choose and 2) an anonymized CSV tally of all sales from Tuesday, November 25, up to Monday, December 1.

Thanks again!

What's next for X-Tag project

Many things happened since Mozilla first announced its solution to bring Web Components capabilities to all modern browsers.

To continue our interview series we invited Daniel Buchner, creator of the X-Tag library, to explain how everything started and what's coming next.


Two years ago you made your first talk presenting X-Tag at Mozilla. What were your motivations to build it?

My motivation for writing X-Tag was twofold:

I. Create a polyfill for the Custom Elements spec. I saw this spec as the real foundation of Web Components - the other specs enhance the guts (Shadow DOM, Templates) and distribution (Imports) of custom elements.

II. I saw the Custom Elements API as a raw canvas that provided awesome lifecycle hooks and prototype definition capabilities, but lacked the features and affordances to solve the "80% case" for the development of robust, app-centric elements. I wanted to create a small library that would fill these gaps and make Custom Element development even easier for folks.

How hard was it to create a framework based on a constantly changing set of specs?

It wasn't all that difficult working with a changing spec/implementation, primarily because we quickly came to the conclusion that we would focus on the library, and collaborate with Google on a single, shared polyfill.

This allowed us to run fast while still contributing to the spec development effort. I imagine change tracking of the specs and W3 conversations would have been more difficult if I wasn't directly involved in the Working Group. As I try to imagine the process with the eyes of a developer on the periphery, we could have been a little better at broadcasting changes, but that's more of a general, W3 process point, not a critique of any specific Working Group.

Are there plans to use Web Components inside of Firefox OS? What do you think is the future for the Brick project?

I know Firefox OS developers were eager to use Web Components, I believe they were waiting for the specs to land in Gecko before converting production FxOS code to use them. As far as Brick is concerned, after a few pivots, they are now making decisions about the direction/future of the project.

A couple of months ago you left Mozilla to join Target. How do you see the future of X-Tag now? Do you have plans to keep maintaining it? Are you planning to bring Web Components to Target?

I left Mozilla in April, and soon after the other major X-Tag developer, Arron Schaar, left Mozilla for a start-up. We both still actively work on X-Tag, and we just published a 1.0 release this November (2014).

We are also in the process of moving our docs to Greg Koberger’s excellent documentation platform, and dramatically expanding code coverage. While working on other projects, I have started assembling a set of app-centric elements we intend to release around the end of the year, in a UI library named X-UI. X-UI will be a set of custom elements that only rely on the Custom Elements API (polyfilled or native).

If you're already using X-Tag or Polymer, you're set - just grab the elements you need and go go go!


HTML5 Rocks ( Feed )
Monday, 01 December 2014
Introduction to Service Worker: How to use Service Worker
Service Worker will revolutionize the way we build for the web. Learn about what it is, why it is important and how to use it.
Get retinafied and support kids in need!

Up to and including Monday December 1, 100% of sales of my Retina Web ebook will go to Donors Choose projects in Ferguson, MO.


These kids need our help! Thank you!

Get your copy now!
