Engineer @ Google. Working on Chrome & the web.
Github: github.com/ebidel | Twitter: @ebidel

Using http/2 for App Engine Local Development

I’ve been using App Engine for many years to develop web apps at Google. Most of the open source projects I’ve worked on use it for its simplicity and scalability. A couple of examples are the Google I/O web app, the Chrome team’s Chromestatus.com, Polymer’s site, developers.chrome.com, and WebFundamentals, just to name a few. Heck, I’m even using it to build my wedding website!

http/2 brings development and production closer together

I could say a ton of nice things about App Engine, but one of its huge drawbacks is that the local development environment is far from Google’s production environment (where your app actually runs). One of the most important differences is that App Engine’s development server uses http/1.1 and Google’s infrastructure uses http/2 (h2).

Besides being 16 years in the making, h2 offers many performance improvements over its http/1.1 predecessor: multiplexing, header compression, and server push. In a nutshell, this is the difference between the two:

http/1.1 vs. http/2

One of the things I’m most excited about is that h2 brings development closer to production. By that I mean the shape of a web app doesn’t change when we hit the deploy button.

If you’re like me, you probably develop source code in small, individual files. That’s good for organization, maintainability, and our general sanity :) But for production, it’s an entirely different story. We roll sprite sheets, shard domains, concatenate CSS, and bundle massive amounts of JS together to squeeze out every last drop of performance. With h2, all of those techniques become a thing of the past; in fact, they become anti-patterns.

The h2 protocol makes it more efficient to serve many small files rather than a few large ones.

Small, individual files can lead to improved performance through better HTTP caching. For example, a one-line change won’t invalidate 300KB of bundled JavaScript. Instead, a single file gets evicted from the browser’s cache and the rest of your code is left alone.

So…http/2 is pretty great. All major browsers support it and large cloud/CDN providers have finally started to bake it in. The place that’s still lacking is our development setup (remember my peeve about keeping dev ~= prod).

Since I use App Engine all the time, I wanted a way to close the gap between its prod and dev environment and utilize http/2 on App Engine’s dev server. Turns out, that’s not too hard to do.

Enabling h2 with the App Engine dev server

It’s hard to performance tune an app when the HTTP protocol it uses locally is different than that of production. We want both environments to be as close as possible to each other.

To get dev_appserver.py serving resources over h2, I set up a reverse proxy using a server that supports h2 out of the box. I recommend nginx because it’s fast and easy to set up and configure. The second thing we’ll need to do is set up localhost to serve over https. That sounds scary, but it’s fairly straightforward.

SSL is not a requirement of the h2 protocol itself, but all browsers have mandated it for http/2 to work, and many new (Service Worker) and old (getUserMedia, geolocation) web platform APIs require it.

Setting up nginx as reverse proxy to App Engine

First, I installed nginx using Homebrew:

brew install --with-http2 nginx

Nginx has supported h2 out of the box since v1.9.5, but I had to install it using --with-http2 to get the goodies.

Homebrew installs nginx’s configuration to ~/homebrew/etc/nginx and serves static assets from ~/homebrew/var/www/.

The Nginx install also creates a ~/homebrew/etc/nginx/servers directory where you can stick custom server configurations.

To add a server, create ~/homebrew/etc/nginx/servers/appengine.conf with:

server {
    listen          3000;
    server_name     localhost;

    # If nginx can't find the file, forward the request to the GAE dev server.
    location / {
        try_files   $uri   @gae_server;
    }

    location @gae_server {
        proxy_pass   http://localhost:8080;
    }
}

What this does is forward any requests that nginx can’t find (right now that’s all of them) to your GAE app running on 8080.

Next, fire up your GAE app on 8080:

cd your_gae_app;
dev_appserver.py . --port 8080

and start nginx:

nginx

If you need to stop the server, run:

nginx -s stop

At this point, you should be able to open http://localhost:3000/ and see your GAE app! Requests are still over http/1.1 because we haven’t set up SSL yet.

Still on http 1.1

Enabling SSL for localhost (nginx)

First, generate a self-signed certificate in ~/homebrew/etc/nginx/:

sudo openssl req -x509 -sha256 -newkey rsa:2048 \
    -keyout cert.key -out cert.pem \
    -days 1024 -nodes -subj '/CN=localhost'

This will create a private key (cert.key) and a certificate (cert.pem) for the domain localhost.

Next, modify appengine.conf like so:

server {
    listen          443 ssl http2;
    server_name     localhost;

    ssl                        on;
    ssl_protocols              TLSv1 TLSv1.1 TLSv1.2;
    ssl_certificate            cert.pem; # or /path/to/cert.pem
    ssl_certificate_key        cert.key; # or /path/to/cert.key

    location / {
        try_files   $uri   @gae_server;
    }

    location @gae_server {
        proxy_pass   http://localhost:8080;
    }
}

The first couple of lines enable the ssl and http2 modules on localhost:443. The next few instruct the server to read your private key and certificate (the ones you just generated). The rest of the file remains the same as before.

The OS will throw permission errors for opening ports under 1024, so you’ll need to run nginx using sudo this time. The following command worked for me, but you might be able to get away with just sudo nginx:

sudo ~/homebrew/bin/nginx

Open https://localhost/ (note the “https”) and you’ll get a big ol’ security warning from the browser:

localhost SSL cert warning

Don’t worry! We know that we’re legit. Click “ADVANCED”, and then “Proceed to localhost (unsafe)”.

Note: if you really want the green lock, check out the instructions here to add the self-signed certificate as a trusted certificate in the macOS System Keychain.

Hitting refresh again on https://localhost/ should give you responses over h2:

localhost over SSL

Take this in. Your local GAE app is running over SSL and using http/2 to serve requests!

What about h2 server push?

Tip: see my drop-in http2push-gae library for doing h2 push on Google App Engine.

At the time of writing, Nginx doesn’t support h2 server push, but that doesn’t mean we can’t test with it locally!

h2o is another modern h2 server that’s even easier to configure, comes with an up-to-date h2 implementation, and supports server push out of the box.

First, install h2o using Homebrew:

brew install h2o

By default, h2o installs to ~/homebrew/bin/h2o and will serve static files from ~/homebrew/var/h2o/. You can change where files are served by editing ~/homebrew/etc/h2o/h2o.conf.

Start a web server and verify that you see the default index.html page on http://localhost:8080/:

h2o -c ~/homebrew/etc/h2o/h2o.conf 

Next, create a new config, ~/homebrew/etc/h2o/appengine.conf:

hosts:
  "localhost":
    listen:
      port: 3000
    paths:
      "/":
        proxy.reverse.url: http://localhost:8080/

In this example, I’ve done the same thing as the nginx setup: a server on port 3000 that forwards all requests to App Engine running on port 8080.

Enabling SSL for localhost (h2o)

First, copy over your cert and key from the nginx steps above (or generate new ones):

cp ~/homebrew/etc/nginx/cert.key ~/homebrew/etc/h2o/
cp ~/homebrew/etc/nginx/cert.pem ~/homebrew/etc/h2o/

Modify ~/homebrew/etc/h2o/appengine.conf to include an entry for localhost:443:

hosts:
  "localhost:443":
    listen:
      port: 443
      ssl:
        certificate-file: cert.pem
        key-file:         cert.key
    paths:
      "/":
        proxy.reverse.url: http://localhost:8080/

Start the server using sudo (again, because we’re opening a special port, 443):

sudo ~/homebrew/bin/h2o -c ~/homebrew/etc/h2o/appengine.conf 

Be sure you’ve started the GAE dev server (dev_appserver.py . --port 8080), and open https://localhost to see your running GAE app. Any resources that contain a Link rel=preload header will be server pushed by h2o:

h2 pushed resources

If you want to determine if a resource is being pushed, look for the x-http2-push: pushed header in the response. h2o will set that header on pushed resources. Alternatively, you can drill into Chrome’s chrome://net-internals to verify pushed resources.

The x-http2-push: pushed header
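From the page itself, you can sniff for that header with fetch(). This is just a quick check, and the resource path here is hypothetical:

```javascript
// Request a resource and inspect its response headers.
// h2o adds "x-http2-push: pushed" to responses it server pushed.
fetch('/static/app.js').then(res => {
  if (res.headers.get('x-http2-push') === 'pushed') {
    console.log('resource was server pushed');
  }
});
```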

Maximize perf: speeding up static resources

If you want even more speed, you can have nginx or h2o serve your static files directly instead of proxying them to the dev server. Both servers are much faster than dev_appserver.py and will better mimic production App Engine.

Configuring nginx to serve static resources

Add root /path/to/gae_app/src; to your server config:

server {
    listen          443 ssl http2;
    server_name     localhost;
    root            /path/to/gae_app/src; # add this

    ssl                        on;
    ssl_protocols              TLSv1 TLSv1.1 TLSv1.2;
    ssl_certificate            cert.pem;
    ssl_certificate_key        cert.key;

    location / {
        try_files   $uri   @gae_server;
    }

    location @gae_server {
        proxy_pass   http://localhost:8080;
    }
}

If nginx can find the file within your root, it will serve it directly rather than (needlessly) forwarding it to App Engine. All other requests will be proxied to App Engine as usual.

Configuring h2o to serve static resources

Likewise, h2o can be instructed to serve your static files using file.dir. Just specify a URL request -> /path/to/src mapping:

hosts:
  "localhost:443":
    listen:
      port: 443
      ssl:
        certificate-file: cert.pem
        key-file:         cert.key
    paths:
      "/":
        proxy.reverse.url: http://localhost:8080/
      "/static":
        file.dir: /path/to/gae_app/static # add this

Now, all files under https://localhost/static/* will be served by h2o instead of GAE.

Tip: Check your dev server logs to confirm nginx/h2o is handling the static files. If requests don’t show up when you refresh the page, you’re good to go. If requests do show up, check that you’re using the correct path for root or file.dir.


And with that, voilà! We’ve got the App Engine development server running fully over http/2.




Observing your web app

TL;DR: a dozen+ examples of monitoring changes in a web application.


The web has lots of APIs for knowing what’s going on in your app. You can monitor mucho stuff and observe just about any type of change.

Changes range from simple things like DOM mutations and catching client-side errors to more complex notifications like knowing when the user’s battery is about to run out. The things that remain constant are the ways to deal with them: callbacks, promises, and events.

Below are some of the use cases that I came up with. By no means is the list exhaustive. They’re mostly examples for observing the structure of an app, its state, and the properties of the device it’s running on.

Listen for DOM events (both native and custom):

window.addEventListener('scroll', e => { ... });   // user scrolls the page.

el.addEventListener('focus', e => { ... });        // el is focused.
img.addEventListener('load', e => { ... });        // img is done loading.
input.addEventListener('input', e => { ... });     // user types into input.

el.addEventListener('custom-event', e => { ... }); // catch custom event fired on el.
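For the custom event case, the dispatching side might look like this (a sketch; the element selector and detail payload are made up):

```javascript
const el = document.querySelector('#target'); // any element with a listener attached

// Fire a custom event on el; a 'custom-event' listener catches it.
el.dispatchEvent(new CustomEvent('custom-event', {
  detail: {loadTime: 320}, // arbitrary payload listeners read via e.detail
  bubbles: true            // let ancestor elements observe it too
}));
```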

Listen for modifications to the DOM:

const observer = new MutationObserver(mutations => { ... });
observer.observe(document.body, {
  childList: true,
  subtree: true,
  attributes: true,
  characterData: true
});

Know when the URL changes:

window.onhashchange = e => console.log(location.hash);    
window.onpopstate = e => console.log(document.location, e.state);

Know when the app is being viewed fullscreen:

document.addEventListener('fullscreenchange', e => console.log(document.fullscreenElement));

Read more

Know when someone is sending you a message:

// Cross-domain / window / worker.
window.onmessage = e => { ... };

// WebRTC.
const dc = (new RTCPeerConnection()).createDataChannel();
dc.onmessage = e => { ... };

Know about client-side errors:

// Client-side error?
window.onerror = (msg, src, lineno, colno, error) => { ... };

// Unhandled rejected Promise?
window.onunhandledrejection = e => console.log(e.reason);

Read more

Listen for changes to responsiveness:

const media = window.matchMedia('(orientation: portrait)');
media.addListener(mql => console.log(mql.matches));

// Orientation of device changes.
window.addEventListener('orientationchange', e => {
  console.log(screen.orientation.angle)
});

Read more

Listen for changes to network connectivity:

// Online/offline events.
window.addEventListener('online', e => console.assert(navigator.onLine));
window.addEventListener('offline', e => console.assert(!navigator.onLine));

// Network Information API
navigator.connection.addEventListener('change', e => {
  console.log(navigator.connection.type, 
              navigator.connection.downlinkMax);
});

Read more

Listen for changes to the device battery:

navigator.getBattery().then(battery => {
  battery.addEventListener('chargingchange', e => console.log(battery.charging));
  battery.addEventListener('levelchange', e => console.log(battery.level));
  battery.addEventListener('chargingtimechange', e => console.log(battery.chargingTime));
  battery.addEventListener('dischargingtimechange', e => console.log(battery.dischargingTime));
});

Read more

Know when the tab/page is visible or in focus:

document.addEventListener('visibilitychange', e => console.log(document.hidden));

Read more

Know when the user’s position changes:

navigator.geolocation.watchPosition(pos => console.log(pos.coords))

Know when the permission of an API changes:

const q = navigator.permissions.query({name: 'geolocation'})
q.then(permission => {
  permission.addEventListener('change', e => console.log(e.target.state));
});

Read more

Know when another tab updates localStorage/sessionStorage:

window.addEventListener('storage', e => console.log(e.key, e.oldValue, e.newValue));

Know when an element enters/leaves the viewport (e.g. “Is this element visible?”):

const observer = new IntersectionObserver(changes => { ... }, {threshold: [0.25]});
observer.observe(document.querySelector('#watchMe'));

Read more

Know when the browser is idle (to perform extra work):

requestIdleCallback(deadline => { ... }, {timeout: 2000});

Read more

Know when the browser fetches a resource, or a User Timing event is recorded/measured:

const observer = new PerformanceObserver(list => console.log(list.getEntries()));
observer.observe({entryTypes: ['resource', 'mark', 'measure']});

Read more
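To feed that observer some entries, record User Timing data yourself (the mark/measure names here are arbitrary):

```javascript
performance.mark('task-start');

// ...do some work worth measuring...

performance.mark('task-end');

// Creates a 'measure' entry spanning the two marks; a PerformanceObserver
// watching 'measure' entries receives it in its callback.
performance.measure('task', 'task-start', 'task-end');
```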

Know when properties of an object change (including DOM properties):

// Observe changes to a DOM node's .textContent.
// From https://gist.github.com/ebidel/d923001dd7244dbd3fe0d5116050d227    
const proxy = new Proxy(document.querySelector('#target'), {
  set(target, propKey, value, receiver) {
    if (propKey === 'textContent') {
      console.log('textContent changed to: ' + value);
    }
    target[propKey] = value;
    return true; // indicate success; without this, sets throw in strict mode
  }
});
proxy.textContent = 'Updated content!';

Read more

Lastly, if you’re building custom elements (web components), there are several methods, called reactions, that you can define to observe important things in the element’s lifecycle:

class AppDrawer extends HTMLElement {
  constructor() {
    super(); // always need to call super() first in the ctor.
    // Instance of the element is instantiated.
  }
  connectedCallback() {
    // Called every time the element is inserted into the DOM. 
  }
  disconnectedCallback() {
    // Called every time the element is removed from the DOM. 
  }
  attributeChangedCallback(attrName, oldVal, newVal) {
    // An attribute was added, removed, updated, or replaced. 
  }
  adoptedCallback() {
    // Called when the element is moved into a new document.
  }
}
window.customElements.define('app-drawer', AppDrawer);

Read more

Wowza! What’s crazy is that there are even more APIs in the works.

I suppose you could classify some of these examples as techniques or patterns (e.g. reacting to DOM events). However, many are completely new APIs designed for a specific purpose: measuring performance, knowing battery status, online/offline connectivity, etc.

Honestly, it’s really impressive how much we have access to these days. There’s basically an API for everything.


Mistake? Something missing? Leave a comment.

Update 2016-08-17 - added custom element reaction example

Blink. Chrome’s new rendering engine

Chrome is moving away from WebKit as its rendering engine. This is big news for web developers, so I thought I’d write up my personal take on the matter. Please realize these are my own thoughts and not those of Google.

The new engine is called Blink. It’s open source and based on WebKit.

“You’re kidding, right!?”

That was my reaction when I first heard the news. It was quickly followed by: “Won’t this segment the web even further?” and “Great. An additional rendering engine I have to test.” Being a web developer, I feel your pain.

Honestly, I was extremely skeptical about the decision at first. After several conversations with various members of the web platform team here at Google, I was slowly convinced it might not be such a terrible idea after all. In fact, I’m now convinced it’s a good idea for the long term health and innovation of browsers.

Reflecting on Chrome’s mission

Many of you will be in the same skeptic boat I was. But I think it’s worth remembering the continuing goals of the Chromium project.

From day one the Chrome team’s mission has been to build the best browser possible. Speed, security, and simplicity are in its blood. Over the last four years, I have gained a deep respect for our engineering team. They’re some of the most brilliant engineers in the world. If their consensus is that Chrome cannot be the best browser it can be with WebKit at its core, I fully trust and support that decision. After all, these folks know how to build browsers. If you think about it some more things start to make sense. The architecture of today’s web browser is dramatically different than it was back in 2001 (when WebKit was designed).

The irony in all of this is that we were soon destined to be down to three rendering engines with Opera’s impending move to WebKit. And just today, Mozilla and Samsung announced their work on a new engine, called Servo. So we were at three engines; now we have 4+. Interesting times indeed.

Things we can all look forward to

Ultimately, Chrome is an engineering-driven project, and I’m personally excited about the potential this change offers. Here are a few things to look forward to:

Improved performance & security

Many ideas and proposals have sprung up around things like out-of-process iframes, moving the DOM to JS, multi-threaded layout, faster DOM bindings, and more. Big architectural changes and refactorings mean Chrome gets smaller, more secure, and faster over time.

Increased transparency, accountability, responsibility

Every feature added to the web platform has a cost. Through efforts like the Chromium Feature Dashboard (chromestatus.com), developers will be fully in the know about what features we’re adding. New APIs go through a fine-tooth comb before being released. There’s an extensive process for adding new features.

By the way, watch for chromestatus.com to get much more robust in the coming months. I’m personally helping with that project. Look forward to its v2 :)

No vendor prefixes

What a debacle vendor prefixes have been! Features in Blink are going to be implemented unprefixed and kept behind the “Enable experimental web platform features” flag until they’re ready for prime time. This is a great thing for authors.

Testing Testing Testing

More conformance testing is a win. Period. There’s a huge benefit to all browser vendors when things are interoperable. Blink will be no exception.

Conclusion

I see Blink as an opportunity to take browser engines to the next level. Innovation needs to happen at all levels of the stack, not just shiny new HTML5 features.

Having multiple rendering engines—similar to having multiple browsers—will spur innovation and over time improve the health of the entire open web ecosystem.

If you have burning questions for Blink’s engineering leads (Darin Fisher, Eric Seidel), post them. There will be a live video Q&A tomorrow (Thursday, April 4th) at 1PM PST: developers.google.com/live

Creating .webm video from getUserMedia()

There’s a ton of motivation for being able to record live video. One scenario: you’re capturing video from the webcam. You add some post-production touchups in your favorite online video editing suite. You upload the final product to YouTube and share it out to friends. Stardom proceeds.

MediaStreamRecorder is a WebRTC API for recording getUserMedia() streams (example code). It allows web apps to create a file from a live audio/video session.

MediaStreamRecorder is currently unimplemented in Chrome. However, all is not lost, thanks to Whammy.js. Whammy is a library that encodes .webm video from a list of .webp images, each represented as a dataURL.

As a proof of concept, I’ve created a demo that captures live video from the webcam and creates a .webm file from it.

LAUNCH DEMO

The demo also uses a[download] to let users download their file.
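The a[download] part boils down to a few lines. Here’s a sketch, where webmBlob is the Blob produced by Whammy later in the post and the filename is arbitrary:

```javascript
// Offer a generated blob as a downloadable file.
function offerDownload(webmBlob) {
  var a = document.createElement('a');
  a.href = window.URL.createObjectURL(webmBlob);
  a.download = 'capture.webm'; // the download attribute supplies the filename
  a.textContent = 'Download video';
  document.body.appendChild(a);
}
```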

Creating webp images from <canvas>

The first step is to feed getUserMedia() data into a <video> element:

var video = document.querySelector('video');
video.autoplay = true; // Make sure we're not frozen!

// Note: not using vendor prefixes!
navigator.getUserMedia({video: true}, function(stream) {
  video.src = window.URL.createObjectURL(stream);
}, function(e) {
  console.error(e);
});

Next, draw an individual video frame into a <canvas>:

var canvas = document.querySelector('canvas');
var ctx = canvas.getContext('2d');
ctx.drawImage(video, 0, 0, canvas.width, canvas.height);

Chrome supports canvas.toDataURL("image/webp"). This allows us to read back the <canvas> as a .webp image and encode it as a dataURL, all in one swoop:

var url = canvas.toDataURL('image/webp', 1); // Second param is quality.

Since this only gives us a single frame, we need to repeat the draw/read pattern using a requestAnimationFrame() loop. That’ll give us webp frames at 60fps:

var rafId;
var frames = [];
var CANVAS_WIDTH = canvas.width;
var CANVAS_HEIGHT = canvas.height;

function drawVideoFrame(time) {
  rafId = requestAnimationFrame(drawVideoFrame);
  ctx.drawImage(video, 0, 0, CANVAS_WIDTH, CANVAS_HEIGHT);
  frames.push(canvas.toDataURL('image/webp', 1));
};

rafId = requestAnimationFrame(drawVideoFrame); // Note: not using vendor prefixes!

\m/

The last step is to bring in Whammy. The library includes a static method fromImageArray() that creates a Blob (file) from an array of dataURLs. Perfect! That’s just what we have.

Let’s package all of this goodness up in a stop() method:

function stop() {
  cancelAnimationFrame(rafId);  // Note: not using vendor prefixes!

  // 2nd param: framerate for the video file.
  var webmBlob = Whammy.fromImageArray(frames, 1000 / 60);

  var video = document.createElement('video');
  video.src = window.URL.createObjectURL(webmBlob);

  document.body.appendChild(video);
}

When stop() is called, the requestAnimationFrame() recursion is terminated and the .webm file is created.

Performance and Web Workers

Encoding webp images using canvas.toDataURL('image/webp') takes ~120ms on my MBP. When you do something crazy like this in requestAnimationFrame() callback, the framerate of the live getUserMedia() video stream noticeably drops. It’s too much for the UI thread to handle.

Having the browser encode webp in C++ is far faster than encoding the .webp image in JS.

My tests using libwebpjs in a Web Worker were horrendously slow. The idea was to capture each frame as a Uint8ClampedArray (raw pixel data), save the frames in an array, and postMessage() that data to the worker. The worker was responsible for encoding each pixel array into webp. The whole process took 20+ seconds to encode a single second’s worth of video. Not worth it.

It’s too bad CanvasRenderingContext2D doesn’t exist in the Web Worker context. That would have solved a lot of the perf issues.

Mashups using CORS and responseType=‘document’

I always forget that you can request a resource as a Document using XHR2. Combine this with CORS and things get pretty nice. No need to parse HTML strings and turn them into DOM yourself.

For html5rocks.com, we support CORS on all of our content. It’s trivial to pull down the tutorials page and query the DOM directly using querySelector()/querySelectorAll() on the XHR’s response.

Demo: http://jsbin.com/bovetayuwu

https://gist.github.com/3581825
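The core of the technique looks like this (a sketch; the URL and selector are illustrative and assume the server sends CORS headers):

```javascript
var xhr = new XMLHttpRequest();
xhr.open('GET', 'http://www.html5rocks.com/en/tutorials/', true);
xhr.responseType = 'document'; // ask the browser to parse the response into a Document

xhr.onload = function(e) {
  var doc = this.response; // a Document, not a string!

  // Query it like any other DOM tree. The selector is hypothetical.
  var titles = doc.querySelectorAll('.tutorial_title');
  console.log(titles.length + ' tutorials found');
};

xhr.send();
```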

Data Binding Using data-* Attributes

Custom data-* attributes in HTML5 are pretty rad. They’re especially handy for stashing small amounts of data and retaining minimal state on the DOM. Turns out, they can also be used for one-way data binding!

I’ve been using a nifty trick in recent projects that I thought would be worth sharing. The technique is to use a data attribute to store values (i.e. the data model) and :before/:after pseudo elements to render the values as generated content (i.e. the view). I call it “poor man’s data binding” because it’s not true data binding in the traditional sense, but the semantics are similar. Count it!

Here we go:

<style>
  input {
    vertical-align: middle;
    margin: 2em;
    font-size: 14px;
    height: 20px;
  }
  input::after {
    content: attr(data-value) '/' attr(max);
    position: relative;
    left: 135px;
    top: -20px;
  }
</style>
<input type="range" min="0" max="100" value="25">
<script>
  var input = document.querySelector('input');

  input.dataset.value = input.value; // Set an initial value.

  input.addEventListener('change', function(e) {
    this.dataset.value = this.value;
  });
</script>

TRY IT

Notice the 25/100 updates as you move the slider, but the <input> is the only markup on the page.

The magic line is the content: attr(data-value) '/' attr(max). It uses CSS attr() to pull out the data-value and max attributes; both set using markup on the <input>. As those values change, the generated content is automatically updated. Sick data binding bro.

Really the only benefit of this technique is that we’re not including extraneous markup.

Last but not least, here’s a more complex example that uses CSS transitions to change the height of a div container when clicked. As the height changes, requestAnimationFrame() updates the data-height of the div and the pseudo element picks that up.
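That loop can be sketched like so (the element id and class name are made up):

```javascript
var div = document.querySelector('#container');
var rafId;

div.addEventListener('click', function() {
  div.classList.toggle('expanded'); // kicks off the CSS height transition

  (function update() {
    // Mirror the current height onto data-height; the pseudo element
    // renders it via content: attr(data-height).
    div.dataset.height = div.clientHeight + 'px';
    rafId = requestAnimationFrame(update);
  })();
});

div.addEventListener('transitionend', function() {
  cancelAnimationFrame(rafId); // stop sampling once the transition ends
});
```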

[**TRY IT**](http://jsbin.com/zekohaxenu)

I’m sure that if HTML had been conceived in the age of web apps, we’d have proper DOM/JS data binding by now. Fortunately, initiatives like MDV and Web Components are on their way. One day this stuff will be a reality and native to HTML!

Data binding is a technique for automatically synchronizing data between two sources. On the web, data binding typically manifests itself as updating DOM (UI) in response to events: XHRs, user input, or other business logic doing its thing. Take the canonical todo list for example. When I mark an item as done, the completed count increments. When it’s unchecked, the count decrements. That’s data binding!

If you want true two-way data binding, check out one of the popular MVC frameworks like Angular, Knockout, or Ember.

idb.filesystem.js - Bringing the HTML5 Filesystem API to More Browsers

The HTML5 Filesystem API is a versatile API that addresses many of the use cases that the other offline APIs don’t. It can remedy their shortcomings, like making it difficult to dynamically cache a page. I’m looking at you, AppCache!

My ♥ for the API is deep–so much so that I wrote a book and released a library called filer.js to help promote its adoption. While filer aims to make the API more consumable, it fails to address the elephant in the room: browser support.

Introducing idb.filesystem.js

Today, I’m happy to bring the HTML5 Filesystem API to more browsers by releasing idb.filesystem.js.

idb.filesystem.js is a well-tested JavaScript polyfill of the Filesystem API, intended for browsers that lack native support. Right now, that’s everyone but Chrome. The library works by using IndexedDB as its underlying storage layer. This means any browser supporting IndexedDB now supports the Filesystem API! All you need to do is make Filesystem API calls, and the rest is magic.
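For example, standard Filesystem API code like this runs unmodified on top of the polyfill (the file name is arbitrary):

```javascript
window.requestFileSystem = window.requestFileSystem ||
                           window.webkitRequestFileSystem;

window.requestFileSystem(TEMPORARY, 1024 * 1024, function(fs) {
  // In browsers without native support, this entry is backed by IndexedDB.
  fs.root.getFile('log.txt', {create: true}, function(fileEntry) {
    console.log('Created', fileEntry.fullPath);
  }, onError);
}, onError);

function onError(e) {
  console.error(e);
}
```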

Demos

I’ve thrown together two demo apps to demonstrate the library’s usage. The first is a basic example. It allows you to create empty files/folders, drag files into the app from the desktop, and navigate into a folder or preview a file by clicking its name:

Try the [demo](http://html5-demos.appspot.com/static/filesystem/idb.filesystem.js/demos/basic/index.html) in Firefox 11+

Want to use filer.js’s API with idb.filesystem.js? No problem. 90% of filer.js works out of the box with idb.filesystem.js. In fact, the second demo is a slightly modified version of filer.js’s playground app, showing that the two libraries can work in harmony. \m/

What’s exciting is that both of these apps work in FF, Chrome, and presumably other browsers that implement storing binary data in IndexedDB.

I look forward to your feedback and pull requests!

Introducing filer.js

Some 1300+ lines of code, 106 tests, and a year after I first started it, I’m happy to officially unleash filer.js (https://github.com/ebidel/filer.js); a wrapper library for the HTML5 Filesystem API.

Unlike other libraries [1, 2], filer.js takes a different approach and incorporates some lessons I learned while implementing the Google Docs Python client library. Namely, the library reuses familiar UNIX commands (cp, mv, rm) for its API. My goal was to a.) make the HTML5 API more approachable for developers that have done file I/O in other languages, and b.) make repetitive operations (renaming, moving, duplicating) easier.

So, say you wanted to list the files in a given folder. There’s an ls() for that:

var filer = new Filer();
filer.init({size: 1024 * 1024}, onInit.bind(filer), onError);

function onInit(fs) {
  filer.ls('/', function(entries) {
    // entries is an Array of file/directories in the root folder.
  }, onError);
}

function onError(e) { ... }

A majority of filer.js calls are asynchronous. That’s because the underlying HTML5 API is also asynchronous. However, the library is extremely versatile and tries to be your friend whenever possible. In most cases, callbacks are optional. filer.js is also good at accepting multiple types when working with entries. It accepts entries as string paths, filesystem: URLs, or as the FileEntry/DirectoryEntry object.

For example, ls() is happy to take your filesystem: URL or your DirectoryEntry:

// These will produce the same results.
filer.ls(filer.fs.root.toURL(), function(entries) { ... });
filer.ls(filer.fs.root, function(entries) { ... });
filer.ls('/', function(entries) { ... });

The library clocks in at 24kb (5.6kb compressed). I’ve thrown together a complete sample app to demonstrate most of filer.js’s functionality:

Try the [DEMO](http://html5-demos.appspot.com/static/filesystem/filer.js/demos/index.html)

Lastly, there’s room for improvement:

  1. Incorporate Chrome’s Quota Management API
  2. Make usage in Web Workers more friendly (there is a synchronous API).

I look forward to your feedback and pull requests!

Making file inputs a pleasure to look at

I’ve seen a lot of people ask how to 1.) apply custom styles to an <input type="file"> and 2.) programmatically open the browser’s file dialog with JavaScript. Turns out, the first is a cinch in WebKit. The second comes with a couple of caveats.

If you want to skip ahead, there’s a demo.

Custom file inputs in WebKit

The first example on that demo page shows how to style your basic file input into something great. To achieve magnificence, we start with some standard issue markup:

<input type="file" class="button" multiple>

followed by some semi-rowdy CSS to hide the ::-webkit-file-upload-button pseudo-element and create a fake button using :before content:

.button::-webkit-file-upload-button {
  visibility: hidden;
}
.button:before {
  content: 'Select some files';
  display: inline-block;
  background: -webkit-linear-gradient(top, #f9f9f9, #e3e3e3);
  border: 1px solid #999;
  border-radius: 3px;
  padding: 5px 8px;
  outline: none;
  white-space: nowrap;
  -webkit-user-select: none;
  cursor: pointer;
  text-shadow: 1px 1px #fff;
  font-weight: 700;
  font-size: 10pt;
}
.button:hover:before {
  border-color: black;
}
.button:active:before {
  background: -webkit-linear-gradient(top, #e3e3e3, #f9f9f9);
}


Since this one is only available in WebKit, I’ve left out the other vendor prefixes.

Programmatically opening a file dialog

No browser that I know of lets you simulate a manual click on a file input without user intervention. The reason is security. Browsers require that a user make an explicit manual click (user-initiated click) somewhere on the page. However, once that happens, it’s straightforward to hijack the click and route it to a file input.

My second technique (see this tweet) for styling a file input works across the modern browsers. It requires a bit of extra markup but allows us to “send” the user’s click to a file input.

The trick is to hide the <input type="file"> by setting it to visibility: hidden; and subbing in an extra <button> to handle the user’s actual click:

<style>
#fileElem {
  /* Note: display:none on the input won't trigger the click event in WebKit.
    Setting visibility: hidden and height/width:0 works great. */
  visibility: hidden;
  width: 0;
  height: 0;
}
#fileSelect {
  /* style the button any way you want */
}
</style>

<input type="file" id="fileElem" multiple>
<button id="fileSelect">Select some files</button>

<script>
document.querySelector('#fileSelect').addEventListener('click', function(e) {
  // Use the native click() of the file input.
  document.querySelector('#fileElem').click();
}, false);
</script>


You’ll be even cooler if you use custom events:

function click(el) {
  var evt = document.createEvent('Event');
  evt.initEvent('click', true, true);
  el.dispatchEvent(evt);
}

document.querySelector('#fileSelect').onclick = function(e) {
  // Simulate the click on fileInput with a custom event.
  click(document.querySelector('#fileElem'));
};

Caveat

Most browsers require the fileInput.click() to be called within 1000ms of the user-initiated click. For example, waiting 1.5s will fail because it’s too long after the user initiates a click:

document.querySelector('#fileSelect').onclick = function(e) {
  setTimeout(function() {
    document.querySelector('#fileElem').click(); // Will fail.
  }, 1500);
};

The cap gives you the chance to call window.open(), adjust the UI, or whatever else, before the file dialog opens.

Live demo
