How to clear all inline styles from an HTML element

In short: set the element's style.cssText to the empty string to clear all of its inline styles (for example, myElement.style.cssText = "";).

As a front-end developer at Braintree, I deal with the DOM a lot. For reasons I could bore you with, I needed to use JavaScript to clear all inline styles (but not styles applied from CSS) from an HTML element.

After trying a few less-than-ideal solutions, I found a one-liner that solved the problem: all I had to do was set the element's style.cssText to the empty string, like this:

myElement.style.cssText = "";

That cleared all inline styles! As far as I can tell, this worked in every browser I tested (though I didn't test anything older than IE8).
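To see the trick in isolation, here's a minimal sketch; the plain object below is a stand-in for a real DOM element (an assumption so the snippet can run outside a browser — in a real page, myElement would come from something like querySelector):

```javascript
// Stand-in for a DOM element so this sketch runs anywhere;
// in a browser you'd grab a real element instead.
var myElement = {
  style: { cssText: "color: red; font-size: 2em;" }
};

// Clearing every inline style is a single assignment:
myElement.style.cssText = "";
```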

Hopefully this little trick can help you.

Overwriting document.head in strict mode on Safari

In short: you can't overwrite document.head in strict mode if you're on Safari, so be careful with your polyfills.

This is a pretty niche post, but I ran into this problem today.

document.head is a convenient reference to the <head> element that you can use from JavaScript. Unfortunately, like many convenient features, not all browsers support it.

Luckily, it's an easy fix. Mathias Bynens has a helpful post where he shows how to polyfill it. It's a one-liner:

// Credit to Mathias Bynens for this line!
document.head = document.head || document.getElementsByTagName('head')[0];

This has the nice benefit that it works in all browsers, old and new...except for Safari when you're in strict mode.

Safari (both on desktop and on iOS) will throw an error when you try to overwrite document.head if you're in strict mode. This means that the following function will always throw an error:

function polyfillDocumentHead() {
  'use strict';
  document.head = document.head || document.getElementsByTagName('head')[0];
}

Now that we know that this is an issue, we have a few options.

  1. We can use a second example from the original blog post:

    // Credit to Mathias Bynens again!
    document.head || (document.head = document.getElementsByTagName('head')[0]);
    

    This only assigns document.head when it's undefined, and Safari already defines it, so the problematic assignment never runs there. Unfortunately, linters will complain about this line by default (that includes JSLint, JSHint, and ESLint). You can use your favorite linter's "don't lint this line" feature or disable the relevant check entirely.

  2. You can skip reassigning document.head entirely and store the reference in a new variable instead.

    var head = document.head || document.getElementsByTagName('head')[0];
    

    If you're encountering this problem in a CommonJS environment (like Browserify or Webpack), you can use my new document.head npm module. It works just like the line above but saves you from writing it every time. You use it like this:

    var head = require('document.head');
    

    The whole module is one line!

  3. You can sidestep the problem entirely by using a selector library like jQuery.
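To make option 1's guard concrete, here's a runnable sketch. The plain object is a stand-in for the browser's document (an assumption for illustration); real Safari defines document.head itself, which is exactly why the guarded assignment never fires there even in strict mode:

```javascript
'use strict';

// Stand-in for the browser's document so this sketch runs anywhere.
// In Safari, document.head already exists, so the guarded assignment
// below would never run there — which is what keeps strict mode happy.
var document = {
  getElementsByTagName: function (tag) {
    return tag === 'head' ? [{ nodeName: 'HEAD' }] : [];
  }
};

// Option 1: only assign document.head when it's missing.
document.head || (document.head = document.getElementsByTagName('head')[0]);
```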

And there you have it: properly shimming document.head when you're in strict mode and on Safari! I do not expect this niche post to make it to the front page of anything other than this blog.

Pietime, my entry to JS1k 2015

For the uninitiated, JS1k is a JavaScript code golfing competition. To quote its about page, entrants "submit a self-contained demo in 1024 bytes of pure JS, which in turn may use various web technologies." In other words: see how much you can fit into just one kilobyte of JavaScript code.

There have been some incredible entries. 2013's winner might be my favorite, but there are plenty of other amazing submissions. It's almost spooky to see how much one carefully crafted kilobyte of JavaScript can produce nowadays!

I've been entering since 2013, but I actually placed in the top ten this year! You can check out my submission here. It lets you tell time using a non-traditional method: pie charts.

Many of the lessons of Daniel LeCheminant's four-kilobyte StackOverflow clone were helpful when squeezing my entry into the byte limit. Perhaps the biggest lesson was unintuitive: repeat yourself! JSCrush, the JavaScript compressor, often shrinks repetitive code better than code that simply uses fewer characters. That's why my code has lots of lines like this:

canvas.c.beginPath();
canvas.c.moveTo(s / 2, s / 2);
canvas.c.arc(s / 2, s / 2, s * 0.45, 0, 2 * Math.PI);
canvas.c.stroke();

I could've used a with statement, but that turned out to compress worse than repeating myself like that. That was surprising to me!

Give my submission a look if you'd like, but definitely check out the other entries from this year—there are some really cool ones.

"This Incredible Hospital Robot Is Saving Lives. Also, I Hate It"

From an article on Wired:

Maybe that's why super-intelligent robots make us uncomfortable. It's not just fear that they're dangerous or are going to steal our jobs—it's envy. They're not saddled with emotions. They do everything perfectly. They roll about assuming they can woo us with cute beeps and smooth lines like "thank you." I, for one, shan't be falling for it.

Scrape Delicious bookmarks with 3 command-line tools

I use Delicious for syncing my bookmarks, and I wanted an offline copy of them. I didn't want to wrestle with the Delicious API, so I tried writing a Bash one-liner. Every problem can be solved by an unwieldy Bash one-liner, right?

First, you'll need to install three things.

  1. dels, a Delicious command-line interface. It's a Ruby gem, so install it with gem install delicious-cli.
  2. jq for parsing JSON. Check out its download instructions.
  3. wget for grabbing URLs. brew install wget can install this on OS X, and there are other ways for other systems.

Once you've installed dels, you'll need to authenticate by running dels with no arguments. To sync your bookmarks at any time, type dels -s. You'll probably want to do this before you run your backup so that you have the latest from Delicious.

We'll combine these three commands, like this:

dels -aj | jq '.[] | .href' | xargs -L 1 wget --page-requisites --convert-links --timestamping --adjust-extension --quiet --show-progress

That command above scrapes all of your bookmarks into the current directory. If you want to blindly trust me, just run that command. If you'd like an explanation, read on.

dels -aj will output all of your bookmarks as a JSON array. The -a flag outputs all links and the -j flag outputs the results as JSON. That's step 1; pretty easy! The result might look something like this:

[
  {
    "description": "Indie Game Jams",
    "extended": "",
    "hash": "f041d62bc96a7ee2bf02896ee8cb06e9",
    "href": "http://www.indiegamejams.com/",
    "private": "no",
    "shared": "yes",
    "tag": "indie competition contest game gamedev",
    "time": "2014-12-16T17:02:49+00:00",
    "time_string": "2014-12-16T17:02:49Z"
  },
  {
    "description": "burrito-eating champion",
    ...

Next, we pipe that output to the jq '.[] | .href' bit. .[] iterates over the elements of the array, and .href grabs the URL out of each bookmark. Now we have all of our links! (jq prints them as quoted JSON strings; that's fine here because xargs strips the quotes in the next step, though you could also pass jq the -r flag for raw output.) The output might now look something like this:

http://www.indiegamejams.com/

http://www.nydailynews.com/life-style/eats/man-wins-burrito-eating-championship-chowing-33-10-minutes-article-1.403877

...

Now we want to scrape all of these, and we'll use wget for that. We'll combine several of its command line flags:

  • --page-requisites downloads all of the necessary files for an HTML page, from CSS to images.
  • --convert-links converts "the links in the document to make them suitable for local viewing". In other words, if you downloaded http://example.com/index.html and it contains a link to a relative resource like <a href="about.html">, wget will rewrite the HTML and make it <a href="http://example.com/about.html">.
  • --timestamping keeps you from redownloading everything if you run this command again—you'll only grab what you need to.
  • --adjust-extension will add the .html extension to pages that don't have a file extension (where appropriate).
  • --quiet and --show-progress together keep your terminal from overflowing with information while still showing a progress bar.

To pipe each URL to wget, we'll use xargs. We'll pipe it like this:

... | xargs -L 1 wget --page-requisites ...

The -L 1 flag makes xargs run wget once per line of input. And that's everything! Once again, here's the full command:

dels -aj | jq '.[] | .href' | xargs -L 1 wget --page-requisites --convert-links --timestamping --adjust-extension --quiet --show-progress

Run this and scrape your Delicious bookmarks!