Pietime, my entry to JS1k 2015

For the uninitiated, JS1k is a JavaScript code golfing competition. To quote its about page, entrants "submit a self-contained demo in 1024 bytes of pure JS, which in turn may use various web technologies." In other words: see how much you can fit into just one kilobyte of JavaScript code.

There have been some incredible entries. 2013's winner might be my favorite, but there are plenty of other amazing submissions. It's almost spooky to see how much one carefully crafted kilobyte of JavaScript can produce nowadays!

I've been entering since 2013, but I actually placed in the top ten this year! You can check out my submission here. It lets you tell time using a non-traditional method: pie charts.

Many of the lessons from Daniel LeCheminant's four-kilobyte StackOverflow clone were helpful when squeezing my entry under the byte limit. Perhaps the biggest lesson was unintuitive: repeat yourself! The JSCrush JavaScript compressor handles repeated code better than code that's merely short, so repetition often wins after compression. That's why my code has lots of lines like this:

canvas.c.beginPath();
canvas.c.moveTo(s / 2, s / 2);
canvas.c.arc(s / 2, s / 2, s * 0.45, 0, 2 * Math.PI);
canvas.c.stroke();

I could've used with, but that turned out to compress worse than repeating myself like that. That was surprising to me!
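
For the curious, the with version would have looked roughly like this (a sketch, not the exact code from my entry):

// Shorter before compression, but JSCrush compressed it worse than the repeated version.
with (canvas.c) {
  beginPath();
  moveTo(s / 2, s / 2);
  arc(s / 2, s / 2, s * 0.45, 0, 2 * Math.PI);
  stroke();
}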

Give my submission a look if you'd like, but definitely check out the other entries from this year—there are some really cool ones.

"This Incredible Hospital Robot Is Saving Lives. Also, I Hate It"

From an article on Wired:

Maybe that's why super-intelligent robots make us uncomfortable. It's not just fear that they're dangerous or are going to steal our jobs—it's envy. They're not saddled with emotions. They do everything perfectly. They roll about assuming they can woo us with cute beeps and smooth lines like "thank you." I, for one, shan't be falling for it.

Scrape Delicious bookmarks with 3 command-line tools

I use Delicious to sync my bookmarks, and I wanted an offline copy of them. I didn't want to wrestle with the Delicious API, so I tried writing a Bash one-liner instead. Every problem can be solved by an unwieldy Bash one-liner, right?

First, you'll need to install three things.

  1. dels, a Delicious command-line interface. It's a Ruby gem, so install it with gem install delicious-cli.
  2. jq for parsing JSON. Check out its download instructions.
  3. wget for grabbing URLs. brew install wget will install it on OS X, and there are other ways for other systems.

Once you've installed dels, you'll need to authenticate by running dels with no arguments. To sync your bookmarks at any time, run dels -s. You'll probably want to do this right before your backup so that you have the latest from Delicious.
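
In other words, the dels part of the workflow boils down to this:

# First run only: running dels with no arguments authenticates you.
dels

# Any time after that: pull down the latest bookmarks from Delicious.
dels -s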

We'll combine these three commands, like this:

dels -aj | jq '.[] | .href' | xargs -L 1 wget --page-requisites --convert-links --timestamping --adjust-extension --quiet --show-progress

The command above scrapes all of your bookmarks into the current directory. If you want to blindly trust me, just run it. If you'd like an explanation, read on.

dels -aj will output all of your bookmarks as a JSON array. The -a flag outputs all links and the -j flag outputs the results as JSON. That's step 1; pretty easy! The result might look something like this:

[
  {
    "description": "Indie Game Jams",
    "extended": "",
    "hash": "f041d62bc96a7ee2bf02896ee8cb06e9",
    "href": "http://www.indiegamejams.com/",
    "private": "no",
    "shared": "yes",
    "tag": "indie competition contest game gamedev",
    "time": "2014-12-16T17:02:49+00:00",
    "time_string": "2014-12-16T17:02:49Z"
  },
  {
    "description": "burrito-eating champion",
    ...

Next, we pipe that JSON to the jq '.[] | .href' bit. The .[] filter spits out each element of the array, and .href pulls the href out of each one. Now we have all of our links! The output might look something like this:

http://www.indiegamejams.com/
http://www.nydailynews.com/life-style/eats/man-wins-burrito-eating-championship-chowing-33-10-minutes-article-1.403877
...

Now we want to scrape all of these, and we'll use wget for that. We'll combine several of its command line flags:

  • --page-requisites downloads all of the necessary files for an HTML page, from CSS to images.
  • --convert-links converts "the links in the document to make them suitable for local viewing". In other words, if you downloaded http://example.com/index.html and it contains a link to a relative resource like <a href="about.html">, wget will rewrite the HTML and make it <a href="http://example.com/about.html"> (since about.html itself isn't downloaded, the link points back at the live site).
  • --timestamping keeps you from redownloading everything if you run this command again—you'll only grab what you need to.
  • --adjust-extension will add the .html extension to pages that don't have a file extension (where appropriate).
  • --quiet and --show-progress together keep your terminal from overflowing with output, but you'll still see a progress bar for each download.

To feed each URL to wget, we'll use xargs, like this:

... | xargs -L 1 wget --page-requisites ...

The -L 1 flag tells xargs to run wget once per line of input. And that's everything! Once again, here's the full command:

dels -aj | jq '.[] | .href' | xargs -L 1 wget --page-requisites --convert-links --timestamping --adjust-extension --quiet --show-progress

Run this and scrape your Delicious bookmarks!

Skip the header of a file with Python's CSV reader

I was choosing a new credit card and was torn between two options. One of them offered cash back on all purchases. The other offered less cash back on all purchases but much more cash back on certain purchases. I wanted to know: which credit card was better, based on my purchase history? Like any normal person, I exported my transactions as CSV and began to dig around with Python.

My CSV file had a header on the first line, which was different from the rest. The file looked something like this (not my actual transactions):

Date,Description,Amount
2015-01-03,Cakes,22.55
2014-12-28,Rent,1000
2014-12-27,Candy Shop,12
...

I wanted to use Python's built-in CSV reader and skip the header line entirely. So I did this:

import csv

with open('mycsv.csv', 'r') as csvfile:
    csvreader = csv.reader(csvfile)

    # This skips the first row of the CSV file.
    # csvreader.next() also works in Python 2.
    next(csvreader)

    for row in csvreader:
        # do stuff with rows...

The call to next reads the first row and discards it. From there, we're ready to iterate through the actual data! One small caveat: Python 3's csv reader wants text, so open the file in r mode; I had issues when opening it in binary mode (rb).
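
From there, the actual card comparison is just arithmetic over the rows. Here's a rough sketch of the idea; the rates, bonus categories, and column handling below are invented to match the example CSV above, not my real cards or transactions:

import csv

# Hypothetical numbers, purely for illustration, not real card terms.
FLAT_RATE = 0.015                    # card A: 1.5% back on everything
BASE_RATE, BONUS_RATE = 0.01, 0.03   # card B: 1% back, 3% in bonus categories
BONUS_CATEGORIES = {'Cakes', 'Candy Shop'}

card_a = card_b = 0.0

with open('mycsv.csv', 'r') as csvfile:
    csvreader = csv.reader(csvfile)
    next(csvreader)  # skip the header row

    for date, description, amount in csvreader:
        amount = float(amount)
        card_a += amount * FLAT_RATE
        rate = BONUS_RATE if description in BONUS_CATEGORIES else BASE_RATE
        card_b += amount * rate

print('Card A: $%.2f back, card B: $%.2f back' % (card_a, card_b))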

This trick helped me choose my credit card and may help you do the same with CSVs in Python!

"Raising the wall" for open source

From Jenn Schiffer's interview on JavaScript Jabber:

If you want people to contribute, that's ... also really important. So I can dilute my project with all these frameworks and task builders and stuff like that, but every time you add something like that, you're raising the wall that keeps the project between you and the people that you want to develop with it.

In other words, you should lower the barrier to entry as much as you can; I think this is often more valuable than having the newest tech stack.

A great interview, discussing art as an "easy gateway" into teaching programming.