The Notepad’s full of ballpoint hypertext

Safari Speedbumps


For a long time now, I’ve had a problem with Safari taking a long time to load a website when I first navigate to it: there will be a long pause (5–10 sec) with no visible progress or network traffic. Then there will be a burst of traffic, the site will load fully inside of a second, and every page I visit within that site afterwards will be lightning fast.

The same thing happens whether I’m at work, at home, or on public wifi (using a VPN, of course). I’ve tried disabling all extensions, and the same stall happens in Chrome too. So this was mystifying to me.

But I think I might have finally found the source of the problem. I was in Safari’s preferences window and noticed this little warning on the Security tab:

Safari preferences pane showing a problem with the ‘Safe Browsing Service’

I unchecked that box, and the problem seems to have disappeared.

Now, I haven’t yet been able to find any official information on exactly how Safe Browsing Service works, but it’s not hard to make an educated guess. If it’s turned on, the first time you browse to a website, the name of that website would first get sent, in a separate request, to Apple’s servers, which would return a thumbs up/thumbs down type of response. A problem on Apple’s end would cause these requests to time out, making any website’s first load terribly slow. And as the screenshot shows, clearly there is a problem on Apple’s end, because the Safe Browsing Service is said to be “unavailable”. (It says it’s only been unavailable for 1 day but I have reason to believe that number just resets every day.)

The fact that disabling the setting on Safari fixed the problem in Chrome too leads me to believe that this is in fact an OS-level setting, enforced on all outgoing HTTP requests, not just a Safari preference.

Anyway, if you are having this problem, see if disabling Safe Browsing Service solves it for you.


Site Incident Report


Important notice: last Tuesday night all my sites went offline. In the interests of extreme transparency, I present this complete incident report and postmortem.

Impact

  1. All of my websites that still use Textpattern were broken from 11pm Jan 31 until about lunchtime the next day. (The Notepad does not use Textpattern and was mostly unaffected, except for #2 below.)
  2. All the traffic logged on all sites during that time was lost. So I have no record of how many people used my websites during the time that my websites were unusable.

Timeline

(All times are my time zone.)

  1. Tuesday Jan 31, late evening: I logged into my web server and noticed a message about an updated version of my Ubuntu distribution being available. I was in a good mood and ran the do-release-upgrade command, even knowing it would probably cause problems. Because breaking your personal web server’s legs every once in a while is a good way to learn stuff. If I’d noticed that this “update” proposed to take my server from version 14.04 all the way to 16.04, I’d have said Hell no.
  2. In about half an hour the process was complete and sure enough, all my DB-driven sites were serving up ugly PHP errors.

Recovery

  1. Soon determined that my Apache config referred to a non-existent PHP5 plugin. Installed PHP7 because why the hell not.
  2. More errors. The version of Textpattern I was using on all these sites doesn’t work with PHP7. Installed the latest version of Textpattern on one of the sites.
  3. Textpattern site still throwing errors because a few of my plugins didn’t like PHP7 either. Logged into the MySQL CLI and manually disabled them in the database (see the sketch after this list).
  4. Textpattern’s DB upgrade script kept failing because it doesn’t like something about my databases. I began the process of hand-editing each of the tables in one of the affected websites.
  5. Sometime around midnight my brother texted asking me to drive over and take him in to the emergency room. I judged it best to get over there in a hurry so I closed up my laptop and did that. His situation is a little dicey right now; it was possible that when I got there I’d find him bleeding or dying. That wasn’t it, thankfully. By four in the morning they had him stabilized and I was able to drive home.
  6. Morning of Feb 1st: I got out of bed at around eight, made myself some coffee, and emailed my boss to tell him I wouldn’t be in the office until nine-thirty.
  7. After driving in to work, I remembered almost all of my websites were still busted. I started to think about the ramifications. I wondered if anyone had noticed. I opened Twitter for the first time since before the election and closed it again, appalled.
  8. At lunchtime I drove to the coffee shop for some more caffeine and a sandwich. I remember it got up to 30º F that day so I almost didn’t need a coat. After I ate my sandwich I pulled out my laptop and resumed poking around the same database and trying to swap in all the mental state from before the hospital trip.
  9. Towards the end of my lunch hour I decided that this wasn’t fun anymore. Maybe I could poke this one database until Textpattern would stop whining about it, but there was still the matter of the broken plugins, and then I’d have to go through the same rigmarole for the other three sites.
  10. Sometime between noon and 1pm I logged into my DigitalOcean dashboard and clicked a button to restore the automatic backup from 18 hours ago. In two minutes it was done and all the sites were running normally.
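
For the record, item 3’s plugin-disabling step looked roughly like this. A sketch from memory, assuming Textpattern’s default txp_ table prefix and its status column (1 when a plugin is enabled, 0 when disabled):

# assumes the default txp_ prefix; 'some_plugin' is a placeholder name
mysql -u username -p db_name -e "UPDATE txp_plugin SET status = 0 WHERE name = 'some_plugin';"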

Problems Encountered

  1. In-place OS upgrades across major releases will always break your stack
  2. Textpattern 4.5.7 doesn’t support PHP7
  3. Textpattern 4.6.0 needs a bunch of hacks to work with newer versions of MySQL
  4. Emergency rooms always have so much waiting time in between tests and stuff

Post-Recovery Followup Tasks

  1. Leave the goddamn server alone
  2. Revisit shelved projects that involve getting rid of Textpattern and MySQL.

Advent of Code 2016


I’m giving this year’s Advent of Code event a shot.

Since I’m also using this as a way of learning a little about literate programming, the programs I write are also web pages describing themselves. I’m uploading those web pages to a subsection of this site, where you can read my solutions and watch my progress.


Flattening a Site: From Database to Static Files


I just finished converting a site from running on a database-driven CMS (Textpattern in this case) to a bunch of static HTML files. No, I don’t mean I switched to a static site generator like Jekyll or Octopress; I mean it’s just plain HTML files and nothing else. I call this “flattening” a site. (I wanted a way to refer to this process that would distinguish it from “archiving”, which to me also connotes taking the site offline. I passed on “embalming” and “mummifying” for similar reasons.)

In this form, a web site can run for decades with almost no maintenance or cost. It will be very tedious if you ever want to change it, but that is fine because the whole point is long-term preservation. It’s a considerate, responsible thing to do with a website when you’re pretty much done updating it forever. Keeping the site online prevents link rot, and you never know what use someone will make of it.

How to Flatpack

Before getting rid of your site’s CMS and its database, make use of it to simplify the site as much as possible. It’s going to be incredibly tedious to fix or change anything later on, so now’s the time to do it. In particular, you want to edit any templates that affect the content of multiple pages.

Next, on your web server, make a temp directory (outside the site’s own directory) and download static copies of all the site’s pages into it with the wget command:

wget --recursive --domains howellcreekradio.com --html-extension howellcreekradio.com/

This will download every page on the site and every file linked to on those pages. In my case it included images and MP3 files which I didn’t need. I deleted those until I had only the .html files left.
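
If you’d rather do that pruning from the shell than by hand, a find one-liner will do it. A sketch; run it without -delete first to preview what it matches:

# delete every file under the current directory that isn't .html
find . -type f ! -name '*.html' -delete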

Digression: Mass-editing links and filenames from the command line

This bit is pretty specific to my own situation but perhaps some will find it instructive. At this point I was almost done, but there was a bit of updating to do that couldn’t be done from within my CMS. My home page on this site had “Older” and “Newer” links at the bottom in order to browse through the episodes, and I wanted to keep it this way. These older/newer links were generated by the CMS as query-string URLs: http://site.com/?pg=2 and so on. When wget downloads these links (with the --html-extension option), it saves them as files of the form index.html?pg=2.html. These all needed to be renamed, and the pagination links that refer to them needed to be updated.

I happen to use ZSH, which comes with an alternative to the standard mv command called zmv that recognizes patterns:

zmv 'index.html\?pg=([0-9]).html' 'page$1.html'
zmv 'index.html\?pg=([0-9][0-9]).html' 'page$1.html'

So now these files were all named page2.html through page20.html, but they still contained links in the old ?pg= format. I was able to update these in one fell swoop with a one-liner:

grep -rl \?pg= . | xargs sed -i -E 's/\?pg=([0-9]+)/page\1.html/g'

To dissect this a bit:

  1. grep -rl \?pg= . lists (-l) every file under the current directory (-r) whose contents match ?pg=.
  2. xargs passes that list of filenames along as arguments to sed.
  3. sed -i edits each file in place; -E enables extended regular expressions.
  4. The expression s/\?pg=([0-9]+)/page\1.html/g captures each page number and rewrites every ?pg=N link into the corresponding pageN.html form.

OK, digression over.

Back up the CMS and Database

Before actually switching, it’s a good idea to freeze-dry a copy of the old site, so to speak, in case you ever need it again.

Export the database to a plain-text backup:

mysqldump -u username -pPASSWORD db_name > dbbackup.sql

Then save a gzip of that .sql file and the whole site directory before proceeding.
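
Something like the following does the trick (a sketch; the site path is hypothetical):

# compress the SQL dump, then bundle it with the site directory
gzip dbbackup.sql
tar -czf site-frozen.tar.gz dbbackup.sql.gz /var/www/mysite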

Shutting down the CMS and swapping in the static files

Final steps:

  1. Move the HTML files you downloaded and modified above into the site’s public folder.
  2. Add redirects or rewrite rules for every page on your site. For example, if your server uses Apache, you would edit the site’s .htaccess file so that URLs on your site like site.com/about/ would be internally rewritten as site.com/about.html (see the sketch after this list). This is going to be different depending on what CMS was being used, but essentially you want to be sure that any URL that anyone might have used as a link to your site continues to work.
  3. Delete all CMS-related files from your site’s public folder (you saved that backup, right?) In my case I deleted index.php, css.php, and the whole textpattern/ directory.
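
For Apache, the rewrite rule in step 2 might look like this minimal .htaccess sketch. It assumes mod_rewrite is enabled and that every pretty URL maps onto a file of the same name plus .html; adjust the pattern to your own CMS’s URL scheme:

RewriteEngine On
# if the request isn't an existing file, retry it with .html appended
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.+?)/?$ $1.html [L]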

Once you’re done

Watch your site’s logs for 404 errors for a couple of weeks to make sure you didn’t miss anything.
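
A quick way to eyeball those, assuming an Apache combined-format log at Ubuntu’s usual path:

# count 404s by requested path, most frequent first
awk '$9 == 404 {print $7}' /var/log/apache2/access.log | sort | uniq -c | sort -rn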

What to do now? You could leave your site running where it is. Or, long term, consider having it served from a place like NearlyFreeSpeech for pennies a month.


Splitting Pollen tags with Racket macros


This may be one of the nerdiest things I have ever written, but I know there may be three or five people who will find it useful. This post is specifically for people who are using Pollen to generate content in multiple output formats, and who may also be using a separate build system like make.

Normally when targeting multiple output formats in Pollen, you’d write a tag function something like this:

📄 pollen.rkt
; …

(define (strong . xs)
  (case (current-poly-target)
    [(ltx) (apply string-append `("\\textbf{" ,@xs "}"))]
    [else `(strong ,@xs)]))

; …

Here, everything for the strong tag is contained in a single tidy function that produces different output depending on the current output format. This is fine for simple projects, but not ideal for more complex ones, for a couple of reasons.

First there’s the issue of tracking dependencies. Let’s say every Pollen file in your project gets rendered as an HTML file and as part of a PDF file. Then one day you make a small change in your pollen.rkt file. Does this edit affect just the HTML files? Or the PDF files? Or both? Which ones now need to be rebuilt? If you’re doing things as shown above, there’s no straightforward way for Pollen (or make) to determine this; you’ll have to rebuild all the output files every time.

Then there’s the issue of readability. Even with two possible output formats, pollen.rkt gets much more difficult to read. I didn’t even want to think about how hairy it would get at three or four.

I decided to address this by having each output format get its own separate .rkt file, containing its own definitions for each tag function, prefixed by the output format:

📄 html-tags.rkt
(define (html-strong attrs elements)
  `(strong ,attrs ,@elements))
📄 pdf-tags.rkt
(define (pdf-strong attrs elements)
  (apply string-append `("\\textbf{" ,@elements "}")))

That part is simple enough. But you also need a way for pollen.rkt to branch to one tag or the other depending on the current poly target.

To handle this part, I wrote a macro, poly-branch-tag, which allows you to define a tag that will automatically call a different tag function depending on the current output format. The macro is rather long, but you can view it in the polytag.rkt file of this blog’s source code at Github.

Defining tag functions with poly-branch-tag

To use this macro, first copy the polytag.rkt file from this blog’s source code into your project.

You then include polytag.rkt and declare your Pollen tags using the macro. The first argument is the tag name, optionally followed by a single required attribute and/or any number of optional attributes with default values:

📄 pollen.rkt
#lang racket
(require pollen/setup)
(require "polytag.rkt")
(require "html-tags.rkt" "pdf-tags.rkt")

; Define our poly targets as usual
(module setup racket/base
    (provide (all-defined-out))
    (define poly-targets '(html pdf)))

; Simple tag with no required or default attributes
(poly-branch-tag strong)

; Tag with a single required attribute
(poly-branch-tag link url)

; Tag with required attribute + some optional attrs w/defaults
(poly-branch-tag figure src (fullwidth #f) (link-url ""))

For every tag function declared this way, write the additional functions needed for each output type in your (setup:poly-targets). E.g., for strong above, we would define html-strong and pdf-strong inside their respective .rkt files.

These tag functions should always accept exactly two arguments: a list of attributes and a list of elements. The macro will ensure that any required attribute is present and any default values are applied. Here’s an example:

📄 html-tags.rkt
(define (html-figure attrs elems)     ; Important! Tag name must have the html- prefix
  (define src (attr-val 'src attrs))  ; You can grab what you need from attrs
  (if (attr-val 'fullwidth attrs)     ; (I made (attr-val) accept boolean values in attributes)
      (make-fullwidth src elems)      ; [dummy example]
      (make-figure src elems)))       ; [dummy: a Racket if needs both branches]

The benefits, reiterated

If you use a dependency system like make in your Pollen project, you now have a clear separation between output files in a particular format and the code that produces output in that format. An edit to html-tags.rkt will only affect HTML files. An edit to pdf-tags.rkt will only affect PDF files. You can see this blog’s makefile for a detailed example.
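
To sketch the dependency idea in make (hypothetical file and directory names; this assumes raco pollen render’s -t flag to select the poly target, and remember recipe lines need a literal tab):

# HTML output depends only on the HTML tag definitions,
# so an edit to pdf-tags.rkt won't trigger an HTML rebuild
posts/%.html: posts/%.poly.pm pollen.rkt polytag.rkt html-tags.rkt
	raco pollen render -t html $<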

It’s also easier to add output formats without losing your sanity. Each output format gets its own .rkt file where you can define your tag functions all the way up to root, and the logic for each output format is much easier to follow than if they were all jammed in together in one file.

Finally, I found that there’s a third benefit, delightful and unintended, that comes with this approach as well: pollen.rkt, stripped of all function definition code, becomes essentially a very readable, self-updating cheatsheet of your project’s tags. See what I mean in this blog’s pollen.rkt. This alone might almost tempt me to use poly-branch-tag even in projects where HTML is the only format being targeted.


Testing network switches


A couple of weeks ago, one of the two Netgear GS748T network switches in our main office failed. The lights were still blinking, but nothing connected to it was able to talk to anything else. We were able to plug almost everyone into the other switch; the rest we put on a temporary 8-port switch until we could get a replacement.

We ordered two more of these switches off eBay (a replacement plus a spare), and those arrived today. After testing both of them, I was able to swap out the bad one and figure out exactly what had happened to it.

How I test a switch

This is pretty basic and generic, but maybe someone will find it useful.

  1. Grab the reference manual and hard-reset the switch to factory defaults.
  2. Connect directly to the switch with an ethernet cable. Does the port light up?
  3. Manually set your computer’s IP address to correspond to the switch’s defaults. In this case, the Netgear’s default IP is 192.168.0.239 with a subnet mask of 255.255.255.0, so I set my computer to 192.168.0.20 with the same subnet mask (see the sketch after this list).
  4. Try to ping the switch at its default address. Does it respond? If not, plug in another computer and set its IP address manually as well. Can you ping it? Try it across several ports.
  5. From your browser, try to log in to the switch’s web interface. In this case I browsed to http://192.168.0.239 and was greeted with the login screen.
  6. Try transferring data between two computers connected through the switch. In my case I was testing with two Windows machines, so I used NetCPS to benchmark these transfers. Again, use several different ports. If the ports are visibly divided between “banks” of 4 or 8 ports, test each bank. (Testing each individual port is overkill in most cases.)
  7. Managed switches usually have their own OS with a command-line interface that you can open by connecting through a separate “console” port (either RJ-45 or serial DB-9). Try to log in through this interface and poke around; refer to the switch’s manual for details. (The GS748T doesn’t have a separate console port or a CLI, so this point wasn’t applicable in this particular case. On another occasion, though, when I had an HP ProCurve switch that was acting up, connecting via the console port revealed a barrage of error messages and an endless cycle of rebooting. Having a saved copy of this output was very helpful when I was on the phone with the manufacturer demanding a warranty replacement.)
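
For steps 3 and 4, this is all it takes on a Linux laptop (a sketch; your interface name will vary):

# temporarily give the wired interface an address on the switch's default subnet
sudo ip addr add 192.168.0.20/24 dev eth0
ping -c 5 192.168.0.239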

So what happened here?

In the case of our failing Netgear GS748T, after I pulled it out I found it was still “working”: I could connect to its web interface, and even send data between a couple of computers connected via the switch, but several things indicated something was wrong.

First of all, pinging the switch itself while plugged into it directly was yielding response times of 7–14ms. This may seem pretty fast, but an acceptable response time is more like 1ms, max.

Second, by looking at the error counters in the switch’s web interface, I noticed Rx errors piling up after only a few minutes:

Rx errors piling up after only a few minutes of traffic

An acceptable number of errors is zero, assuming there is no problem with the cables themselves.

All of this points towards some degradation that destroys performance when traffic increases past a certain point.

Finally, just for the heck of it, we opened up the switch’s casing and took a look at its innards.

The inside of the Netgear GS748T

The capacitors with flat tops (such as the group of four on the left) are in good shape, but the ones with bulging, rounded tops (there are three in this pic) have definitely gone bad. Hardware companies often try to save money by getting cheap, low-quality capacitors, and when they fail, they start to bulge like this.

The failed capacitors definitely seem to explain our problem. Personally, I would not have bothered unscrewing the casing on the failed switch, but it was a good way to confirm that we were in fact dealing with a hardware failure. (Nor would I have ordered the same make/model as a replacement. The “new” switches are a later revision than the originals, though, GS748Tv3H1 vs GS748Tv1H3, so hopefully that represents some improvement.) You might also want to do this if you ever order used network gear; if any of the capacitors are bulging like this, you know to return the item immediately.


Shot in the Patoot


happy birthday to former President James Garfield, mortally shot in the patoot
reminder that one time we shoved whiskey and beef bouillon up a president’s butt until he died nytimes.com/2006/07/25/hea…

(Solved) DNS_PROBE_FINISHED error, degraded internet performance


Recently at the office we started having major network issues: browsers intermittently failing to load pages with DNS_PROBE_FINISHED errors, and noticeably degraded internet performance all around.

Troubleshooting

Google searches for the DNS_PROBE_FINISHED error invariably lead to advice suggesting that you perform a netsh winsock reset and restart your computer. Unsurprisingly, this didn’t work in our case: the problem had begun affecting everyone at once, so unless there had been a bad Windows update or something (our IT support agency hadn’t heard of any), a per-machine fix was unlikely to help.

We also ruled out the ISP as the cause. We have two WAN connections–one fiber and one cable–and switching to one or the other exclusively did not resolve the issue. Support tickets with ISPs confirmed there were no upstream connection or network problems.

Examining Switches

We had just that day moved a bunch of desks around one part of the office. Our IT support agency suggested we had some kind of switch-level spanning tree problem–a switch plugged into itself, perhaps, in some roundabout way. I tried rebooting the main switch used for non-VoIP traffic, and the problem immediately cleared up–for about ten minutes, and then it returned. We also tried disconnecting all the jacks for everyone who had been affected by the move, to rule out any subtle looping issues the move might have created (even though only one or two jacks had been involved); no dice.

I opened a support ticket with the switch company (Extreme Networks). They had me telnet into the switch and capture the output of a bunch of commands and send it to them, which allowed them to rule out any configuration or looping issues on the switch.

We upgraded the firmware, which dated from 2011, and restarted the switch. Again the problem cleared up and did not recur for the rest of the day. But by this point most people had gone home or to find somewhere else to work. I was curious if the problem would recur on Monday when everyone came back; sure enough, with 10 people in the office at 8:00 am Monday everything was fine, but by 8:30 we were having the same problem again.

At this point we were ready to try unplugging every person, port by port, waiting 5 seconds, and pinging Google, to see if we could narrow the problem down to a particular network jack/user. Thankfully it didn’t come to that.

The culprit

This time on our firewall I noticed that the “connection count” was hovering close to or even above the stated maximum of 10,000. Occasionally the connection utilization would drop to 5–6% and then the problem would go away. I used the firewall’s “packet capture” interface to look at a few seconds’ worth of network traffic and noticed a high number of UDP packets coming from a particular LAN IP address, with sequential foreign destination IPs.
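
If your firewall doesn’t have a packet-capture interface, you can gather the same information with tcpdump on a Linux box plugged into a mirrored switch port. A sketch (interface name assumed) that counts UDP packets per source address:

# capture 2000 UDP packets and tally them by source IP
sudo tcpdump -i eth0 -n -c 2000 udp 2>/dev/null | awk '{print $3}' | cut -d. -f1-4 | sort | uniq -c | sort -rn | head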

I was able to track down the computer with this IP address; it belonged to one of our sales people. The laptop was a Lenovo running Windows 8. In Task Manager I saw that it was sending 1.5 MBps over the wired Ethernet interface and 800–900 Kbps over the wireless interface, even with no apps running. (Task Manager did not show which process was causing this.) Upon disconnecting the CAT5e cable, the connection utilization on the firewall dropped to 40%. Disconnecting the wifi dropped it further to 7%.

Looking at CPU usage, the process discovery.exe appeared abnormally high. A Google search finally turned up this article: Excessive network traffic and wifi drops linked to LenovoEMC Storage connector, which stated:

Corporate networks or ISPs may detect an excessive amount of unusual network traffic coming from ThinkPad systems preloaded with Microsoft Windows 8.1. The network traffic may be interpreted as a network flood or denial-of-service attack. As a result, the system may become restricted on the network or the network may stop functioning normally.

“LenovoEMC Storage Connector” is preloaded on some ThinkPad models to help customers discover and connect to LenovoEMC storage devices on their network. The process causing the network flood is discovery.exe, which is a component of “LenovoEMC Storage Connector”.

Uninstalling the Lenovo EMC Storage Connector from the offending laptop finally fixed the issue.


Marked 2 Previews of Scrivener Files Coming Up Blank


You’re supposed to be able to drag and drop a Scrivener file onto the Marked app icon in the dock and have Marked open a preview for you.

However, when I did this with one particular Scrivener file, the Marked preview came up blank, even though I had plenty of Markdown-formatted content in the project.

I finally found the solution, after checking other things. (Among them: whether “Include in compile” was turned on for each document in the Scrivener project; if it’s unchecked, Marked does exclude that document from the preview, even though no “compile” is actually taking place from Scrivener’s perspective.) When you create a new “blank” Scrivener project, it gives you a couple of default folders; one of those is named Drafts and contains a blank text document. That Drafts folder, it seems, is actually supposed to be the root folder of all your project’s content.

What I had done was create a folder outside the Drafts folder and create all of my content there. Marked will not preview content located outside the Drafts folder.

The solution is to rename the original Drafts to something else like Content, and create a separate Drafts folder under it. Then make sure everything you might want to preview in Marked is located under that Content folder.

I’m using the latest versions of Marked 2 (2.4.10) and Scrivener (2.6).


How to embed and subset your own webfonts


These days many of us use Typekit for serving web fonts; and should you need to create and serve them yourself, most of the advice out there will point you to FontSquirrel’s well-known Webfont Generator. However, I’m a big fan of not relying on third-party sites and services if you can avoid it. What happens in five years when those sites are no longer available? It’s best to know how to get the same result yourself using basic tools, and the result will often give you better performance.

Webfonts have gotten much easier to implement. Because nearly all browsers support WOFF, you no longer need to supply four different file formats for each font in order to be sure it will be supported on all browsers. You can simply convert any binary font file into Base64 encoding and embed the resulting text/data right into your CSS file. However, there are additional steps you should take in order to optimize the size and downloading of your webfonts.

Base64 Encoding

This part is easy. The command base64 filename will take any file and encode it. This works on Linux and Mac OS X.

With this info, we can create a quick shell script that will take any font file and convert it into a webfont embedded right in a snippet of CSS:

#!/bin/bash
# Usage: ./webfont-encode <font file> <font-family name>

fontName=$1
fontFace=$2

# GNU base64 wraps its output at 76 columns, which would break the data
# URI; stripping newlines makes this safe on both Linux and OS X.
WOFF=$(base64 "$fontName" | tr -d '\n')

echo "@font-face {"
echo "  font-family: '$fontFace';"
echo "  font-weight: normal;"
echo "  font-style: normal;"
echo "  font-stretch: normal;"
echo "  src: url('data:application/font-woff;charset=utf-8;base64,$WOFF') format('woff');"
echo "}"

Save this to a file called webfont-encode and make it executable:

chmod u+x ./webfont-encode

You can then use it to create the webfont from any font file and append it to your CSS file, like so:

./webfont-encode AlegreyaSans-Reg.otf alegreyasans-1 >> fonts.css

On OS X, you can also copy the result to the clipboard if you wish:

./webfont-encode AlegreyaSans-Reg.otf alegreyasans-1 | pbcopy

Most times you’ll need to do this four times for each typeface: once each for the regular, italic, bold, and bold italic versions of the font. For each one, edit the font-weight and font-style CSS attributes to reflect the actual attributes of that font.

Once you’ve done all that, you can include the above stylesheet in your HTML (make sure it comes before any other stylesheets) and reference the font in your other CSS styles.
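
In other words, something like this (the font-family name matches the example above; the fallback fonts are up to you):

<link rel="stylesheet" href="fonts.css">  <!-- must come before other stylesheets -->
<link rel="stylesheet" href="main.css">

Then, inside main.css:

body { font-family: 'alegreyasans-1', Georgia, serif; }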

Note: The original font file needs to have its “embeddable” flag set to 0x0000 or the font loading will fail in Internet Explorer (at least in versions 9 through 11). Other browsers do not seem to check for this value. If your font license allows embedding, you can find some more Python code for modifying the flag here: http://www.typophile.com/node/102671.

Subsetting your fonts

If you create your own webfonts using only the above steps, your files will be huge–likely around 1 MB per typeface once you include bold and italic versions. This is because most typefaces (the good ones, at least) include characters for Russian, Greek, Hebrew, Arabic, and many other character sets. You can speed up your site a lot by whittling each font down to only the characters you’re likely to need.

For this part I’m assuming you’re on OS X. You’re going to need Homebrew and Python installed.

First, install the fontforge python extensions with brew install fontforge. Then:

export PYTHONPATH=/usr/local/lib/python2.7/site-packages:$PYTHONPATH

(You’ll need to do this every time you open a new terminal unless you permanently add Homebrew’s package folder to your Python path.)
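
To make it permanent (assuming the default bash shell on OS X):

# append the export line to your profile so new terminals pick it up
echo 'export PYTHONPATH=/usr/local/lib/python2.7/site-packages:$PYTHONPATH' >> ~/.bash_profile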

Next, get yourself a copy of glyphIgo. This lovely Python script can convert between TTF and OTF font formats, and subset fonts based on any character set. Alberto Pettarin has a great blog post explaining more about its use.

If you have git installed (brew install git), the following command will download a copy into a glyphIgo folder:

git clone https://github.com/pettarin/glyphIgo.git

Now we need to create the character set: a file containing each of the characters you want to include in your font. I wrote a quick Python script to assist with this, and have included the codes for several extra typographic symbols that I use often:

from __future__ import print_function

charsets = [
    [0x0020,0x007F],    # BASIC LATIN
    [0x00A1,0x00FF],    # PUNCTUATION, LATIN-1 SUPPLEMENT, COMMON SYMBOLS
    [0xFB00,0xFB04],    # STANDARD ENGLISH LIGATURES fi, fl, ffi, ffl
    [0x0100,0x017F],    # LATIN EXTENDED-A
    [0x0180,0x024F],    # LATIN EXTENDED-B
    [0x2010,0x2015],    # HYPHENS AND DASHES
    [0x2018,0x2019],    # LEFT/RIGHT SINGLE QUOTATION MARKS
    [0x201C,0x201D],    # LEFT/RIGHT DOUBLE QUOTATION MARKS
    [0x2026,0x2026],    # ELLIPSES
    [0x221E,0x221E],    # INFINITY SYMBOL
    [0x2190,0x2193],    # ARROWS
    [0x21A9,0x21A9],    # LEFTWARDS ARROW WITH HOOK
    [0x2761,0x2761],    # CURVED STEM PARAGRAPH SIGN ORNAMENT
    [0x2766,0x2767]     # FLORAL HEART, ROTATED FLORAL HEART
]

for charset in charsets:
    for x in range(charset[0],charset[1]+1):
        print(unichr(x).encode('utf-8'),end='')

Some notes on customizing this script: If you want to include any additional character blocks in your font, simply add the hex ranges to the charsets list (see the complete list of Unicode blocks). Single characters can be added by making the first and second numbers of the “range” identical.

Including the ligatures (if the font supports them) makes for a better result on some platforms. On Mac, for example, Safari will make use of the ligatures but Chrome doesn’t use them at all.

Save the above script as makeset.py and create your character set file like so:

python makeset.py > latin.set

Armed with our character set, you can now subset your font like so:

python glyphIgo.py subset -f AlegreyaSans-Regular.otf -p latin.set -o AS-R.otf

Now encode the new “minimized” font using the script we created above:

./webfont-encode AS-R.otf alegreyasans-1 >> fonts.css

Results

To get an idea of the size reduction, I used these methods to produce a single CSS file containing regular, italic, bold, and bold italic versions of two typefaces (a total of eight @font-faces).

Without subsetting the fonts, the resulting CSS file was 2.1 MB in size. When I subset the fonts before encoding them, the result was 611 KB in size, a 70% reduction.

For comparison, Typekit reports a size of 339K for a kit containing the same two typefaces, using the “Default” character set and without including OpenType features. However, I strongly suspect that this is the compressed size of their kit. You can achieve the same result by enabling gzip compression on your web server. On my own server, according to checkgzipcompression.com, the 611 KB CSS file gets packed down to a 345 KB download, almost identical to the Typekit version.
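
You can check the compressed size locally without touching a server:

# byte count of fonts.css after gzip compression
gzip -9 -c fonts.css | wc -c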

I could probably save even more space by omitting the Latin Extended-A and Extended-B blocks in my character set.

Speeding it up even more

Once you have your fonts set the way you like them, you should do yourself a further favour and check out Adam Beres-Deak’s post Loading webfonts with high performance on responsive websites. Using the simple Javascript in his post, you can make your fonts load and perform much faster than they would if you were using Typekit or Google Fonts.