Wednesday 25 January 2012

Now, Using CDN-Power!

Using a CDN has been on my list for some time now, and I've finally gotten around to configuring one. I just changed the tracking snippet to use Amazon's CloudFront CDN, ensuring that the tracking code is edge-cached at over 25 locations world-wide, in North America, South America, Europe and Asia. This means that your users will see the least possible delay in loading Errorception's tracking code. Your users get blazing-fast performance by default! And boy, do we at Errorception love high performance. :)

This change is being rolled out to all of Errorception's users, whether on the free trial or on any of the paid plans, at no extra cost.

Action required

Unfortunately, to use the CDN you will have to change the tracking snippet on your site. Log in, go over to Settings > Tracking Snippet to get your new snippet, and replace the old one on your site with it. I know it's a chore — I apologize for the inconvenience. However, the benefits far outweigh the little trouble.

While the old tracking snippet will continue to work for some more time, I urge you to upgrade soon so that you can make the most of the CDN. At some point in the not-too-distant future, the old tracking snippet will not be supported anymore.

Tuesday 17 January 2012

Writing Quality Third-Party JS - Part 3: Planning for an API

In the first post in this series, I wrote about the fact that you don't own the page, and how that affects even little things in your code. In the second post in the series, I dived into a good deal of detail about how to bootstrap your code. In this third and final part of the series, I'll talk about how to make an API available to your users, and how to communicate with your server.

This is a three-part series of articles aimed at people who want to write JavaScript widgets (or other kinds of scripts) to make their application's data/services/UI available on other websites. This is the third in the series, discussing means to making an API available to your users, and communicating with your server.

So you want to provide an API

Why is this such a big deal? I mean, look at jQuery, or any library for that matter, right? You add a script tag to the page, and you have the API ready to use, right? How is this any different for a third-party script?

It's different depending on the way you choose to load your code. In Part 2: Loading Your Code, we went over why your code should be loaded asynchronously. We'll get back to that in a moment. Let's first consider the really bad case when your loading strategy requires that your code is loaded synchronously — Twitter's @Anywhere API, for example.

Again, to clarify, I'm sure Twitter has got their reasons to do what they do. I'm still going to call it bad because it is something you should not do.

So, from Twitter's examples:
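(Reconstructed from the @Anywhere docs of the era; the exact URL and parameters are illustrative.)

    <script src="http://platform.twitter.com/anywhere.js?id=YOUR_API_KEY&v=1"
            type="text/javascript"></script>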

This is really the simplest way to define an API. It's similar to how JS libraries, like jQuery, work. Add a script tag, and then use the library's code. Simple enough. Except, for reasons explained in Part 2, we should load our code in an asynchronous fashion.

Asynchronous loading

If your code is loaded in an asynchronous fashion, it means that the browser's parser will continue parsing and evaluating other script tags on the page before your own script is evaluated. This is great of course — it means that you have stepped out of the way when loading the page. Unfortunately, this also means that if someone attempts to access your API namespace, they might get errors, depending on whether your code has been downloaded and evaluated already or not. You now need a way of signaling to your API consumer that your code is ready to use. Facebook's documentation talks about how they do this, from the API consumer's perspective:
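Their pattern looks roughly like this (paraphrased from their docs of the time; the init parameters shown are illustrative):

    window.fbAsyncInit = function() {
      FB.init({appId: 'your-app-id', xfbml: true});
      // The SDK has loaded; it's now safe to make FB.* calls
    };

    (function() {
      var e = document.createElement('script');
      e.src = document.location.protocol + '//connect.facebook.net/en_US/all.js';
      e.async = true;
      document.getElementsByTagName('head')[0].appendChild(e);
    }());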

What Facebook requires you to do is to declare a global function called fbAsyncInit. When Facebook's JS SDK has loaded, it checks for the existence of this function in the global object (window), and then simply calls it if it exists. In fact, here are the relevant lines from the SDK that call this function:
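(Quoted approximately; I've de-minified and reformatted:)

    if (window.fbAsyncInit && !window.fbAsyncInit.hasRun) {
      window.fbAsyncInit.hasRun = true;
      window.fbAsyncInit();
    }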

The hasRun is irrelevant to us — it's an internal detail of Facebook's. But note how they explicitly check if the function exists, and if it does, they call it. I've removed an outer wrapper here for clarity — the code above is called in a setTimeout(0) to ensure that it runs at the end of the execution stack. Chances are, you'd want to wait till the end of the execution stack too. You could either wait explicitly till the end, or fire it off in a setTimeout, like Facebook does.

To drive home the point, the flow works as follows:

  • Before starting to load the code, ask the user to define a global function that should be called when the third-party code has finished loading.
  • Start loading the third-party code asynchronously.
  • In the third-party code, when the code has finished loading, call the global function if it has been defined.

There are minor variations one can make on the pattern above. One variation I'd like to make is to remove the need for a global function. I'd agree that it's a bit nit-picky to remove just one global variable, but take it for what it's worth.
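Here's a sketch of what I mean (FB.onReady is hypothetical; this is not Facebook's actual API):

    // The API consumer sets up the namespace and a ready callback,
    // instead of defining a global fbAsyncInit function:
    window.FB = window.FB || {};
    FB.onReady = function() {
      // safe to make API calls here
    };

    // ...and the SDK, once it has loaded, checks for the callback on
    // its own namespace:
    if (FB.onReady) {
      FB.onReady();
    }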

The improvement is that fbAsyncInit has been replaced by FB.onReady. This removes the need for the global function definition. Also, when FB's SDK loads, it will continue to use the FB namespace, so no new globals are created.

For APIs that are as complex as Facebook's, I think this is the best that can be done without adding more complexity. There are many more interesting things that can be done if you are willing to embrace, say, AMD support, but this is the very least required.

Other uses of having a predefined global

There are other uses of having a predefined global. For example, it could house all your initialization parameters. In FB's case, the SDK might need the API key before making any API calls. This could be configured as members of the FB object, even before the API is actually loaded.

But Facebook is complex

Not all APIs are as rich as Facebook's, which allows for both reads and writes. Usually, widgets/snippets are much simpler than that.

Write-only APIs

This requires special mention, since the most frequently used third-party APIs are usually invisible, write-only APIs (ref: end of this post). Take Google Analytics for example. It only collects data from the page, and posts it to Google's servers. It doesn't read anything from Google's servers - it doesn't need to. The API is a classic write-only API. In such a case, initialization can be simplified drastically. In fact, this is what Errorception itself does too — admittedly blatantly copying the technique from GA.

If you follow this technique, you don't need a global onReady function to be defined, since it is immaterial to know when the code has loaded. All you need is a queue that gets flushed to the server. This queue can be maintained as an array of ordered items. Since arrays already have a .push method on them, that is your API method! It's that simple!

So both Google Analytics and, learning from it, Errorception have a .push method on their only global variable (_gaq / _errs), because this global variable is essentially just a regular array! See how it's set up:
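Here's the relevant part of GA's snippet; _gaq starts life as a plain array, and commands are queued onto it with .push:

    var _gaq = _gaq || [];
    _gaq.push(['_setAccount', 'UA-XXXXX-X']);
    _gaq.push(['_trackPageview']);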

This doesn't stop you from doing what you'd expect a decent write API to do. For example, GA lets you do a bunch of configuration and record custom data, all using just the .push method.

In Errorception's case, we are recording JS errors, and despite the need to load our code late and asynchronously, errors must be caught as early as possible. So, we start populating this queue as early as possible. Our embed code itself does this, using the window.onerror event.
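In simplified form, the embed code does something like this (a sketch, not the verbatim snippet):

    window._errs = window._errs || [];
    window.onerror = function() {
      _errs.push(arguments);  // queue the error until the full script loads
    };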

This way, errors are caught very early in the page lifecycle, without compromising performance one bit. Once our code has loaded, we simply start processing this queue to flush it to the server. Once the queue has been completely flushed for the first time, we redefine the global _errs to be an object (instead of an array), with just one .push method on it. So, all existing push calls will continue to work, and when push is called we can directly trigger internal code. This might break if someone has kept a reference to the original _errs array before we swap out the global. An alternative would be to leave the array untouched, and to poll it to check for new members. Since polling just seems inefficient, and at the moment I don't have a public API anyway, I opted for redefining .push.
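A sketch of that swap (sendToServer is a hypothetical stand-in for our internal processing):

    // Once the initial array has been flushed, replace the global with
    // an object whose push processes errors immediately:
    window._errs = {
      push: function(errorArgs) {
        sendToServer(errorArgs);
      }
    };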

It's hard to read the minified code from Google Analytics, but it appears that Google does the exact same thing. They too seem to be redefining the global _gaq to add a .push that points to an internal method.

Communicating with your server

There are established ways to bypass the browser's same-origin policy and communicate with remote servers across domains. Though usually considered hacks, there's no way around them in third-party scripts. Let's quickly round up the most common techniques.

Make an image request with a query string

This is the technique Google Analytics uses. When the queue is flushed, the data is encoded into query strings, and an Image object is created to load an image with the data passed in as the query string. The server responds with a simple 1x1 GIF (it has to respond with something that plays well with the page), and the query-string data is recorded as what the client had to say.
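A minimal sketch of the technique (the endpoint URL is a placeholder):

    var serializedData = "event=pageview";  // your data, serialized
    var img = new Image();
    img.src = "http://tracking.example.com/beacon.gif?data=" +
              encodeURIComponent(serializedData);
    // Merely setting src triggers the request; the image never needs
    // to be appended to the DOM.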

Pros: Simple, non-obtrusive since the DOM is not affected. (The image need not be appended to the DOM.) Works everywhere.
Cons: Ideal only for client-to-server communication. Not the best way to have a two-way communication. Have to consider URL length limits. Can only use HTTP GETs.

JSON-P

You can alternatively create a JSON-P request. This is in essence similar to the image technique above, except that instead of creating an image object, we create a script tag. It comes with the downside that we'll be creating script tags each time we want to tell the server something (and hence we'll have to aggressively clean up), but also has the upside that we get two-way communication, since we can listen to what the server has to say.
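A minimal sketch (the callback parameter name is whatever your server expects; everything here is illustrative):

    function jsonp(url, callback) {
      var name = "_jsonp_" + new Date().getTime();
      window[name] = function(data) {
        callback(data);
        window[name] = null;                    // clean up the temporary global
        script.parentNode.removeChild(script);  // clean up the script tag
      };
      var script = document.createElement("script");
      script.src = url + "?callback=" + name;
      var first = document.getElementsByTagName("script")[0];
      first.parentNode.insertBefore(script, first);
    }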

Pros: Still simple. Two way communication, since the server can respond meaningfully. Works everywhere. Excellent when you want to read data from the server.
Cons: Still have to consider URL length limits. Only HTTP GETs. Requires DOM cleanup.

Posting in hidden iframes

This is the technique Errorception uses. In this method, we create a hidden iframe, post a form to that iframe, wait for the iframe to finish loading, then destroy the iframe. The data is sent to the server as POST parameters, but the response is not readable, since once the form has been POSTed the iframe is on a different origin.
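A sketch of the flow (the endpoint URL is a placeholder):

    var serializedData = "your serialized data";

    var iframe = document.createElement("iframe");
    iframe.name = "post_target_" + new Date().getTime();
    iframe.style.display = "none";

    var form = document.createElement("form");
    form.method = "POST";
    form.action = "http://api.example.com/errors";
    form.target = iframe.name;
    form.style.display = "none";

    var input = document.createElement("input");
    input.type = "hidden";
    input.name = "payload";
    input.value = serializedData;
    form.appendChild(input);

    var first = document.getElementsByTagName("script")[0];
    first.parentNode.insertBefore(iframe, first);
    first.parentNode.insertBefore(form, first);

    iframe.onload = function() {
      // The response has arrived (though we can't read it), so clean up.
      form.parentNode.removeChild(form);
      iframe.parentNode.removeChild(iframe);
    };
    form.submit();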

Pros: Simple. HTTP semantics respected. Works everywhere. URL length limits don't apply.
Cons: Only client-to-server communication. Requires DOM cleanup.

CORS

Errorception will soon be moving to CORS, while maintaining the iframe approach for backwards compatibility. The benefit over the iframe-based approach, for us, is that no DOM cleanup is required. Another obvious benefit is that you can read the response of the request as well, though this is not very critical for the fire-and-forget nature of write APIs like Errorception's.
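A sketch of what that might look like (the endpoint is a placeholder; XDomainRequest covers IE8/9):

    var serializedData = "your serialized data";
    var url = "http://api.example.com/errors";
    if (window.XMLHttpRequest &&
        "withCredentials" in new XMLHttpRequest()) {
      var xhr = new XMLHttpRequest();   // CORS-capable browsers
      xhr.open("POST", url, true);
      xhr.send(serializedData);
    } else if (window.XDomainRequest) { // IE8/9
      var xdr = new XDomainRequest();
      xdr.open("POST", url);
      xdr.send(serializedData);
    }
    // ...else fall back to the iframe technique above.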

Pros: Full control over HTTP semantics. No data size limits. No DOM alterations.
Cons: Only works in newer browsers, hence must be backed up by another method for the time being.

More elaborate hacks

Hacks can get pretty elaborate, of course. A couple of years ago, I discussed on my personal blog how nested iframes can be used to devise an ugly but workable method of establishing read-write communication. This was at a time when CORS wasn't around yet; CORS should supersede it now, though even today you'd need this mechanism to deal with older browsers. Facebook still uses it as a fallback for older browsers, as do other read-write APIs like Google Calendar.

Wrapping up

This has been one massive article series! We've very quickly covered several topics to do with creating high-quality third-party JS. It gives a decent bird's-eye view of the factors to consider, and possible solutions to problems, if you want to be a well-behaved citizen on the page. There are so many more details we could get into, but I have to draw the line somewhere. :)

Do let me know what you think in the comments. Have you come across other problems in writing your own third-party JS? What fixes did you use? I'll only be glad to know more.

Wednesday 11 January 2012

Writing Quality Third-Party JS - Part 2: Loading Your Code

In the previous post Writing Quality Third-Party JS - Part 1: The First Rule, I wrote about the fact that you don't own the page, and how that affects even little things in your code. With that background, this post focuses on how to load your code in the host page.

This is a three-part series of articles aimed at people who want to write JavaScript widgets (or other kinds of scripts) to make their application's data/services/UI available on other websites. This is the second in the series, tackling the topic of loading your code in the host page.

Bootstrapping

Remember how in my last post I kept harping on the fact that you don't own the page? It applies here too. Two factors to consider are:

  • Can the network performance effects of downloading your code be eliminated completely?
  • If your servers ever go down, will it affect the host page at all?

What's wrong with a <script> tag?

I won't go into too much detail here since it has been written about in enough detail elsewhere, but just to quickly summarize: a script tag will block download of resources on your page AND will block the rendering of the page. It's what Steve Souders calls the Frontend Single Point of Failure. You don't want to be a point of failure.

It surprises me, then, that Twitter not only requires you to include a vanilla script tag, but also asks that you put it in the <head> of your page. This is despite it being common knowledge at the time of the launch of their @Anywhere APIs that this is a bad practice. In fact, quite to the contrary, they say in their docs:

As a best practice always place the anywhere.js file as close to the top of the page as possible.

Now, I don't mean to make them look bad — they are smart people and know what they are doing. What I'm saying is that I can't think of any reason to do this. They claim that it's for a better experience with OAuth 2.0 authentication, but I'm not convinced. I'd recommend that you do not use Twitter's model as an example for how to load your script. There are much better mechanisms available.

To be fair, even Google Analytics was doing something very similar until recently, but at least they asked for the script tag to be before the closing </body> tag. They've since deprecated this technique anyway.

Eliminating the performance hit

Realize that my first point above isn't about reducing the performance cost, but about eliminating it. Techniques for reducing the performance hit are already well known — caching and expiry, CDNs, etc. But however much you reduce the performance hit, there is still a performance hit. How can it be eliminated?

async FTW?

HTML(5) introduced the async attribute, which instructs the browser's parser-renderer to load the file asynchronously, while continuing to render the page as usual. This is a life-saver, but unfortunately not good enough for use just yet.

Why isn't it good enough? Because it isn't supported in older browsers, including IE9 and lower. Considering that IE is a large part of the browser market, and that as of right now IE10 isn't in lay-people's hands yet, you will need to do better than slap an async attribute on your script tag. The async attribute is amazing, just not ready yet.

Dynamic script tag creation

Another way to load your code is to create a script tag dynamically. Nicholas Zakas has gone into detail about this technique in a blog post, so I won't repeat it all here. I'll just show his code instead:
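(Reproduced approximately from his post; the script URL is a placeholder:)

    var script = document.createElement("script");
    script.type = "text/javascript";
    script.src = "http://your.cdn.com/first.js";
    document.body.appendChild(script);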

Seems simple enough. He uses DOM methods to create a script tag, and then appends it to the page. As Souders has explained, this technique causes the script to be downloaded immediately, but asynchronously. This is usually what you want. In fact, this is the approach Facebook takes to load their JavaScript SDK, though they append to the head instead of the body, which also has the same effect.

There's a minor improvement that has been common knowledge for some time now. Remember how in my previous post I said that you cannot make any assumptions about the page? The snippets above assume that document.body or document.getElementsByTagName("head")[0] reliably exists for use. Turns out, that assumption might be wrong. So, the minor improvement is to not depend on document.body or the head. Instead, only assume that at least one script tag exists on the page. This assumption is always right, since that's how your code is running in the first place. So, instead of appending to document.body, do what Google does:
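(Google's analytics snippet, as commonly distributed at the time:)

    (function() {
      var ga = document.createElement('script');
      ga.type = 'text/javascript';
      ga.async = true;
      ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') +
               '.google-analytics.com/ga.js';
      var s = document.getElementsByTagName('script')[0];
      s.parentNode.insertBefore(ga, s);
    })();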

The code inside the immediately invoked function (hat-tip) creates the script tag, and then appends it as a sibling of the first script tag on the page. It doesn't matter which script tag is first — we rely on the fact that a script tag surely exists on the page. We also rely on the fact that the script tag has a parentNode. Turns out, this is a safe assumption to make.

Both these scripts set script.async = true;. I'm unsure why they do this. Firefox's documentation implies that this is not necessary in Firefox, IE, or WebKit:

In Firefox 4.0, the async DOM property defaults to true for script-created scripts, so the default behavior matches the behavior of IE and WebKit.

I guess there's no harm done in setting the async flag to true, so everyone just does it. Either that, or I simply don't know. As pointed out by Steve Souders in a comment below, async=true is required for some edge-case browsers, including Firefox 3.0.

But when does the download happen?

All browsers that implement async start the download immediately and asynchronously. This means that your script will be parsed and executed at some time in the future. The question I had at this point was: At what point in the page load cycle does my script get evaluated? Does onload get blocked? This addresses my second point at the top of the post. There might be situations when my script is unreachable due to either server or network problems, and I didn't want Errorception to affect any other scripts on the page in such a situation.

So, I ran quick tests to find the answer, and unsurprisingly the answer is: you can't be sure. Just today, Steve Souders published a Browserscope test page that tests exactly this behavior. It looks like older implementations of async were indeed holding back page load events, which seems to be getting phased out in newer browser versions. If I were writing the tracking snippet today, I would probably have used the technique mentioned above.

Instead, while coding the Errorception tracking script, I decided to do slightly better than these techniques: explicitly wait for the page's onload to fire first, and only then include the Errorception tracking script. The tracking snippet in Errorception's case looks as follows:
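(A sketch of the idea, not the verbatim snippet; the beacon URL and project token are placeholders:)

    (function(window, document) {
      window._errs = ["PROJECT_TOKEN"];
      window.onerror = function() { _errs.push(arguments); };

      function loadScript() {
        var script = document.createElement("script");
        script.async = true;
        script.src = "http://beacon.example.com/" + _errs[0] + ".js";
        var first = document.getElementsByTagName("script")[0];
        first.parentNode.insertBefore(script, first);
      }

      // Wait for onload before even starting the download:
      if (window.addEventListener) {
        window.addEventListener("load", loadScript, false);
      } else {
        window.attachEvent("onload", loadScript);
      }
    })(window, document);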

The snippet above gets completely out of the way during the page load process, ensuring that we do not depend on browsers' event-firing sequences around the async attribute. It has an additional minor benefit: the end user's bandwidth is fully available for loading the site's intended resources before the Errorception tracking code is fetched. Effectively, it prioritizes the site's code and content over Errorception's own code. This fits in perfectly with Errorception's philosophy of not affecting page load time at all. Depending on the type of connection the end-user has, this bandwidth-prioritization technique might have merits.

I'll be the first to admit that what Errorception is doing might not be for everyone. Take Facebook for example. If you are including their API, chances are, you want to do something with it. Delaying the loading will only deteriorate the end-user experience. However, in Errorception's case, there's no such interaction that will ever block, so this works out fine.

I initially thought I'd discuss APIs in this post as well, but it seems to have run too long already. I'll save that bit for the next post.

Errorception is a ridiculously easy-to-use JavaScript error tracking system, designed for high performance and high reliability. There's a free trial so you can give it a spin. Find out more »

In Part 3…

In the next post in the series, I'll discuss mechanisms to signal the availability of your APIs to your developers considering that your code is downloaded at some arbitrary time in the future, and methods of communicating with your server by bypassing the same-origin policy.

Monday 9 January 2012

Writing Quality Third-Party JS - Part 1: The First Rule

It's fascinating how JavaScript has quickly become the de-facto mechanism for delivering third-party integrations that are easily pluggable into people's websites. Services like Facebook, Twitter and Disqus, programmable widgets like Google Maps, and even invisible scripts like Google Analytics, KissMetrics and our own Errorception give you ways to integrate their services with your website.

You'd think that with these mechanisms becoming so popular, there'd be a good deal of information available about how to build great third-party integrations in JavaScript. Turns out, the information available on the Interwebs is actually rather sparse. This series of posts aims to add to that repository of knowledge, based on my experience building Errorception.

This is a three-part series of articles aimed at people who want to write JavaScript widgets (or other kinds of scripts) to make their application's data/services/UI available on other websites. This is the first in the series, highlighting important considerations.

The First Rule

The First Rule of Third-Party JavaScript is... man, this will never sound as epic as Tyler Durden. Anyway, here's the first rule, and the most important consideration:

The Rule: You DO NOT Own The Page

Understanding and assimilating this rule gives you two principal considerations when designing your script.

  • The impact of adding your script should be minimal. Preferably none.
  • You cannot make any assumptions about how the page is coded.

Let's start with the first point. How can you make the impact of your script minimal? A couple of considerations come to mind.

No globals

You should ideally have no global variables in your code. Making sure this happens is rather simple. Firstly, ensure that you enclose your code in a self-executing anonymous function. Secondly, pass your code through a good lint tool to ensure that you've not used any undeclared variables, since undeclared variables will cause implicit global variables. This is also regarded as a general best-practice for JS development, and there's absolutely no reason you shouldn't adhere to it.

A typical self-executing anonymous function looks as follows, in its most minimal form:
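    (function() {
      // all your code here
    })();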

To do this slightly better, I recommend the following form instead:
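    (function(window, document, undefined) {
      // all your code here
    })(window, document);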

In the pattern above, the window and document objects — both rather frequently used — become local-scope variables. Local variables are usually reduced to one or two letter variable names when passed through the most popular JS minifiers, so the size of your code reduces somewhat by using this pattern. I'll come back to the undefined variable in just a bit. Once minified, your code will look something like the following (using closure-compiler in this case). Notice how window and document have been reduced to single letter variable names.
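(Illustrative output, not byte-for-byte what closure-compiler emits:)

    (function(a,b,c){/* your code, with a=window, b=document, c=undefined */})(window,document);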

Wait, what? No globals?

Ok, there are some cases when a global variable is absolutely necessary. Since linkage in JS happens through global variables, it might be necessary to add a global variable just to provide an API namespace to your users. Here's what the most popular third-party snippets do:

  • Google Analytics exposes a _gaq variable (docs) with one method on it — push.
  • Facebook exposes a FB variable (docs), which is the namespace for their API. (Using FB also requires you to define a global fbAsyncInit function, which could've been avoided. I guess they're doing what they do for ease of use, even though it's against best practice.)
  • Twitter @Anywhere exposes a twttr variable (docs), which like FB is their API's container namespace.
  • For completeness, Errorception exposes a _errs variable. We currently do not have an API (coming soon!), but this has been left as a placeholder.

Take a moment to think about those variable names. They are pretty unique to the service-provider, and will usually not conflict. The possibility of conflict cannot be completely avoided, but can be reduced significantly by picking one that's unique to your service. So, exporting a global of $, $$ or _ is just a horrible idea, however easy-to-type it may seem.

No modifications to shared objects

Several objects are shared in the JS runtime — even more than you might imagine at first. For example, the DOM is shared, but then so are the String and Number objects. Do not modify these objects either by altering their instances directly or by modifying their prototypes. That's simply not cool.

The only cautious exception to this rule might be if you need to polyfill browser methods. In all honesty, I would avoid using polyfills as much as possible in my third-party code and I don't recommend it at all, but you could do this if you are feeling adventurous. The reason I wouldn't recommend it is two-fold:

  • The code on the page may make assumptions about browser capabilities based on browser detection rather than feature detection. (Remember, even jQuery removed browser detection only recently, and many popular libraries still do a good deal of browser detection.)
  • The code on the page might use object enumeration (for...in) instead of index-based iteration (for(var i=0; i<len; i++)) to loop through arrays. Even when enumerating object properties (which is a legit use of for...in), the code on the page might not use hasOwnProperty. (See the sketch after this list.)
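For instance, here's how an innocent polyfill-style addition can break a page that iterates arrays with for...in (a minimal illustration):

    // Imagine your script adds a helper to Array.prototype...
    Array.prototype.last = function() { return this[this.length - 1]; };

    // ...and the host page iterates arrays with for...in:
    var arr = [1, 2];
    for (var i in arr) {
      console.log(arr[i]);  // logs 1, 2, and then the `last` function!
    }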

Either of these will break the code on the host page. You don't want code on the host page to break just because they've added your script.

No DOM modifications

Just like you don't own the global namespace, you don't own the DOM either. Making any changes to the DOM is simply unacceptable. Do not add properties or attributes to the DOM, and do not add or remove elements in the DOM. Your widget is never important enough to add extra nodes or attributes to any element of the DOM. This is because code on the page might be too tightly dependent on the DOM being one way, and if you modify it their code is likely to break.

That said, there are two permissible cases when you can modify the DOM. The first is when you are expected to present a UI, and the second is when you need to communicate with your server and circumvent the same-origin policy of the browser. In the first case, make the contract explicit by asking for a DOM node within which you will make the modifications, so that the developer of the page knows what to expect. I'll address the second case in detail in the third post in this series.

Make no assumptions about the page

This is the most complex to ensure. Even Google Analytics has had issues with their tracking script because of assumptions they inadvertently made. Steve Souders has enumerated some on his blog. So, for example, you can't even assume that the DOM will have a <head> node even after parsing the HTML! jQuery also had some bugs in their dynamic loader due to assumptions they inadvertently made. It seems that the only real thing you can rely on is that your script tag is on the page (and hence in the DOM). Nothing else should be taken for granted.

Unfortunately, I don't have a clean solution for this problem. The only solution seems to be to keep your dependencies on the DOM and on native objects to a bare minimum, and to test in every environment you can lay your hands on. At Errorception, I test on way more browsers than I'd care about, including old browser versions and mobile phones. It's the only real way to be sure. I have to do this irrespective of whether I support the browser or not, because it might be perfectly supported by the developers of the page.

undefined redefined

A slightly less scary but equally dangerous problem is that undefined is not a keyword or literal in JavaScript. It really should have been. Since it's not a keyword, it's possible for someone to create a global variable called undefined, and that can mess with your script. If you really need to use undefined, you should ensure that you have a clean, unassigned undefined for your use. There are several ways to make sure you are working with a clean undefined. The anonymous function I've shown above implements one of these mechanisms, such that undefined inside the function is the undefined you expect.

Trust

Before closing this post, I want to touch upon the issue of trust. By adding your script to a page, the developer is knowingly or unknowingly placing a lot of trust in you. I completely dislike this, but unfortunately JavaScript has no built-in mechanism to reduce the problems of trust. Several projects exist to reduce the possibility of vulnerabilities, but they seem too heavy to use. Douglas Crockford has been trying to educate people about the issues, but it seems to be mostly falling on deaf ears.

One post in particular is relevant here: an excellent post by Philip Tellis titled "How much do you trust third-party widgets?". It's a must read to get a gist of the issues surrounding trust in third-party widgets, along with some high-level solutions.

In Part 2…

In the next installment of this article series, I'll talk about strategies to load and bootstrap your code in the page, without causing a performance hit. I'll also be touching upon how, at Errorception, we mitigate the risk of the service going down, if ever — an approach that's definitely not unique to Errorception, but isn't as abundantly used as you think.

Sunday 8 January 2012

A Host of New Features

Just rolled out a new build - the first build of the new year!

Improved duplicate detection

Some time ago I had rolled out the "Mark as Duplicate" feature, which inspected your errors and automatically tried to determine whether they were duplicates of each other. Today that algorithm underwent a significant improvement, and now does a much better job of finding duplicates. This has been applied to all existing errors as well, bringing down the error count by a massive 93%!

What this means for you is that the number of errors you need to worry about has come down significantly. Like I say, fewer bugs = happy developer. :)

"Mark as Duplicate" no more!

Even though the Mark as Duplicate feature was nice and very helpful in bringing down bug count, it always felt clunky and legacy-like. The idea of different bug reports for the same bug, with links flying between them, just felt wrong. It doesn't help productivity if you spend a lot of time just navigating the bug reporting system. It sorely needed improvement.

But why improve it when you can drop the concept completely! Now, all duplicates are merged into one bug, while still preserving all the details that each individual bug had. This gives you details about which browsers the error occurred in, which versions of those browsers, which pages the error was on, and how many times it has occurred in the past, all in one page, and all while keeping the report pretty. I feel this is a huge productivity win. Do let me know what you think.

Graphs integrated right into error reports

The graphs feature was an instant hit from the moment it launched. It was clear that visualizing information graphically is much more pleasant than reading loads of text. So, I've taken the idea of graphs to the next level — displaying them within the error report itself. See the following screenshot for an example.

Inline scripts

Inline script detection has been added. This tells you where the breaking script was located — whether it was in an external script file, or in an inline script tag on the page. This helps a great deal in debugging the error, as you know exactly where to look.

Inline scripts usually tend to exist on several pages — for example if they are generated by Rails helpers. We now smartly detect if there are such pages on which the same error has occurred in the same script, and designate them as duplicates of each other. Previously this wasn't a factor that was taken into account when marking bugs as duplicates. This has been the single largest contributor towards reducing total bug count.

When did the error occur?

One critical piece of information for a JavaScript developer is when in the page life-cycle the error occurred. Errors that happened during bootstrapping (i.e. before page load) are very different from errors that happened after page load. This information was never available before, and has now been added to your error logs. This should simplify debugging significantly.

And much more…

That's just a small set of the new features rolled out. There have been improvements all over the place, right from the error listing to the notification emails. There has also been a massive performance boost — previously the report generation slowed down as the number of errors grew. Support for Opera and iOS has been improved since they started supporting window.onerror. However, probably the most important improvements have been in areas that are invisible — this build gives us immense flexibility at the logging and data-model end, so that additional features can be built with amazing ease and at a high pace.

Do let me know what you think of the new features in the comments or over email. Feedback is always welcome. Wish you a great year ahead!