Linux on X1 Carbon Gen 2.

X1 Carbon Gen2 is the ideal portable machine.

I have dealt with laptops from Asus, ZaReason, System76 and Apple. Nothing even comes close to this thing. It is light but not brittle; nothing feels like it is going to fall apart in four months.

Open the lid, press the power button and feel the smile spread across your face as you hear silence. If you are looking for screen real estate on the go, then this is the machine for you.

Also, the keyboard does not entirely suck.

Ubuntu 13.10

I am not a fan of ubuntu, but it makes a great canary. If a machine will work with ubuntu, then with a little bit of elbow grease you can probably get any other distro onto it. However, if the box fails with ubuntu, then there is no way you are getting another distro to work. This is the folk wisdom and my personal experience.

Ubuntu installs painlessly. All the hardware just works:

  • wifi
  • ethernet
  • video
  • trackpad
  • audio

I was pleasantly surprised. I closed the lid, ate dinner and then came back.

When I opened the lid, nothing. It was like my beautiful new laptop turned into a drunk fratboy that didn’t want to wake up for class. Thus began my adventure down the power management rabbit hole.

The suspend duct tape.

tl;dr: Don't try. Change it to suspend to disk.

You are going to need more knowledge and patience than I have in order to get the x1 carbon gen 2 to suspend to ram. I changed kernels, disabled hardware, tweaked kernel params and trawled through lenovo, ubuntu and various armpits of the net to fix this thing.

The best info I found was from the kernel docs:

Going through these tests was how I found out that s2disk worked.

Great! but ubuntu disables hibernate because things.. mumble… scalable…. mumble mumble…. because…

Getting hibernation on lid close means ignoring all the advice out there about PolicyKit. That approach does NOT work.

Edit /etc/systemd/logind.conf and set HandleLidSwitch=hibernate.
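For reference, the whole change is one line in logind's configuration (the [Login] section header is shown for context; everything else in the file can stay commented out):

```ini
# /etc/systemd/logind.conf
[Login]
HandleLidSwitch=hibernate
```

Restart systemd-logind (or reboot) for the change to take effect.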


While dumping to disk is NOT ideal, it is better than the alternative of power cycling every time you close the lid.

I hope that this will be fixed soon-ish.

The suspend issue is a minor annoyance. The number of pixels that this box delivers is worth every penny.

Fun with org mode and ssh


There are a lot of machines that I have ssh access to: virtual machines (both in vmware and virtualbox), linodes (personal, work, projects), production systems, etc. All told there are probably over 200 systems out there which I will need to ssh into at some point in the future. Sometimes I go years without logging in.

Some of these systems have domain names, but most are just random IPs over some vpn. I can never remember how to get in and have to spend time figuring out which box it is that I need to access.

Failed Solutions


Of course I have ssh keys set up, and I use different key pairs for different roles and uses. This has created a rather complex ~/.ssh/config.
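The per-role key setup looks something like this (the host patterns and key paths here are hypothetical placeholders, not my real config):

```
# ~/.ssh/config -- per-role key pairs (hypothetical hosts and paths)
Host *.work-vpn.example
    User justin
    IdentityFile ~/.ssh/id_rsa_work

Host *.linode.example
    IdentityFile ~/.ssh/id_rsa_personal
```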

I have tried to keep a list of systems in the config with an alias:

Host abc123
    HostName 203.0.113.42
    Port 1234
    User username

Substitute sane values, multiply by about 200 entries, and you rapidly create a large, unmanageable mess.


Try to manage a large /etc/hosts file and you will see firsthand why dns was invented.

I even tried using scripts to pull information from puppet sources, linode APIs and even systems posting back to my public IP.

Obviously, it became a mess.

emacs to the rescue.

One day, after being particularly frustrated because I couldn't get into a box, I was adding yet another entry to ~/.ssh/config only to realize it was already in there twice. I decided to turn this problem into a nail so I could use my universal hammer: emacs.

(defun jgk/xterm-ssh (host)
  "Spawn an xterm with an ssh to HOST."
  (start-process-shell-command
   "org-xterm-ssh" "*ssh-xterm*"
   (concat "xterm -e 'ssh -AY " host "'")))

I now have an org file for each project I work on and can create links that will let me ssh into a system with a 'click':

[[elisp:(jgk/xterm-ssh "username@")][Magical Mystery Machine]]

Since it is in org mode I can now categorize and tag these links. I can take notes and use tramp to open directories.

I am slowly adding systems in as I need them. I have found the ability to keep notes and a basic log of what and why I was there to be invaluable.

If you need to access a lot of random systems you should try this; it has made my life sane.

Usability and Programming

The Problem

The holidays are coming and my wonderful wife, Adrienne, has started thinking about gifts for the family. One of the problems we have is that our entire life is digital, but it is not a promiscuous one. No facebook, flickr, twitter or any of the other numerous places on the net that give you a bit of convenience in return for turning you into a product. We are digital homebodies.

One of the inconvenient consequences of being a digital homebody is that grandparents do not have access to photos of their grandchildren. None of those services would allow us to share videos and photos with the analog generation either. So Adrienne decided to start creating a photobook on lulu.

She spent some time culling our giant photo library down to about 80 images, then tried to upload them to lulu. Only 40 of them made it.

Many years ago, when I was a poor student wandering the country, I was crashing at a friend's house. She was trying to change a light bulb, with her roommate, in a chandelier (technically an electrolier) that was not built to have its light bulbs changed. Ladders, chairs, tables, screwdrivers, hammers and loud swearing were all part of this elaborate process. I was leaning against a wall watching with great amusement as this scene unfolded before me. Being simultaneously relieved and embarrassed at my lack of participation, I was befuddled when my friend thanked me for not interfering, indicating that it was an unusual but welcome trait.

The extreme juxtaposition between her perception of the situation and mine made that moment stay with me. Not only because it has often been my job to interfere when people are having problems, but because, like so many geeks, it is in my nature to solve problems wherever they occur.


Combine the accidental lesson learned so many years ago with the inherently tense situation caused by trying to help a frustrated person, and you can perhaps understand the trepidation I felt when I offered assistance to my wife.

Lulu's book creation software is a giant flash monstrosity. As such, it does not perform well on underpowered 'eco' laptops. Another side effect is that all UI widgets must be non-native, which puts them into the uncanny valley. This combination makes all flash apps cumbersome at best and unusable at worst; lulu is no exception.

After trying to upload the images a couple of times, which created duplicates of the same 40, I wondered if the images were corrupt. Investigation demonstrated that numerous image viewer apps could load them without issue. Trying another book service, blurb, produced the same problem: the same images did not get uploaded. What the hell was going on?

I can't remember the exact moment I saw it, but the file dialog box had '*.jpg, *.jpeg, *.png' as a file filter. Inspecting the files we were trying to upload, I noticed the following extensions: '*.jpg', '*.JPG'. Spot the problem?


It took me a while. Longer than I care to admit.

The filesystem on my wife's laptop is case sensitive. Unlike FAT32, where jpg is the same as JPG, jPg, Jpg, etc., here jpg is NOT the same as JPG. In fact, they are fundamentally different extensions.
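The difference is easy to demonstrate from a shell on any linux box with a case-sensitive filesystem:

```shell
# on a case-sensitive filesystem these are two distinct files;
# on FAT32 the second touch would hit the same file
cd "$(mktemp -d)"
touch photo.jpg photo.JPG
ls photo.* | wc -l    # prints 2
```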

With numerous cell phones and digital cameras each imposing its own file naming convention, one of them creates JPG instead of jpg. I have yet to track down the culprit.


Now that I had found the reason why half the images were not uploading, how to solve it? I couldn't imagine going through 40+ files and renaming them individually. Being a professional geek, I pulled out my trusty bash shell.

Being a good geek, I write my bash one-liners incrementally, so the first attempt is a simple loop:

for f in `find -name '*.JPG'`; do echo $f; done

As any seasoned bash scripter can tell you, there is only one thing that could have made this approach untenable: spaces. Yes, spaces. Since time immemorial, I and every person who has ever written a shell script have despised spaces in file names. There are ways to deal with this, but it was approaching 11pm, I was tired and I just wanted to get this done.
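For the record, the space-safe version is only a little longer; a sketch using find's -print0 (assuming GNU find and bash):

```shell
# rename *.JPG to *.jpg, surviving spaces in file names
find . -name '*.JPG' -print0 | while IFS= read -r -d '' f; do
    mv -- "$f" "${f%.JPG}.jpg"
done
```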

For the past couple of weeks I had been steeped neck deep in erlang. While erlang is probably the last language one should think of when trying to solve this problem, I couldn’t help it. In less than a minute I whipped up the following:

#!/usr/bin/env escript
%%! -noshell -noinput
main(_) ->
    filelib:fold_files(".", ".*\.JPG$", false,
                       fun(F, A) ->
                               Ext = filename:extension(F),
                               Base = filename:basename(F, Ext),
                               New = filename:join(filename:dirname(F),
                                                   Base ++ string:to_lower(Ext)),
                               ok = file:rename(F, New),
                               A
                       end, []).

This worked!

It does have one minor bug that did not affect the desired outcome of changing the extension from JPG to jpg. I will leave finding the bug as an exercise for the reader.


When all 80 of the images were finally uploaded to lulu, my wife asked a very reasonable question: “And I was supposed to do that how?”

A very reasonable question.

  • You would first have to understand the quirks of whatever file system you are using.
  • You would have to notice that the file selection dialog box was not giving you the 'All Files' option.
  • You would then have to be able and/or willing to rename 40 files.

I have been lucky enough to spend the last ten years creating and solving these problems. Since these problems are so prolific there is a pattern recognition machine in my head that can wade through the confusing errors and silent failures.

Software is incomprehensibly complex. No one has figured out how to manage that complexity yet. There are best practices, tools and methodologies available to mitigate the failures caused by software’s inherent complexity. However, they are all just that, a mitigation not a solution.

Given software’s predilection towards failure and its near ubiquity in modern life I am left to wonder how non-developers cope. My heart truly goes out to them.

exprecs, making json usable.

Erlang and Syntax

Many flames have been ignited over erlang's syntax. Erlang as a system is exceptional: easy concurrency, clustering and OTP design principles, to name just a few. However, its syntax leaves a lot to be desired.

There are minor annoyances, like the comma (“,”), period (“.”) and semicolon (“;”), which make diffs larger and harder to read than they should be, and cause irritating compile errors after a quick edit.

A simple contrived example:

add(First, Second) ->
    Result = First + Second.

Now if you want to store the result

add(First, Second) ->
    Result = First + Second,
    store_and_return(Result).

And now there is a two line diff, instead of one:

--- add1.erl    2011-09-20 11:19:18.000000000 -0400
+++ add2.erl    2011-09-20 11:20:34.000000000 -0400
@@ -1,2 +1,3 @@
 add(First,Second) ->
-    Result = First + Second.
+    Result = First + Second,
+    store_and_return(Result).

This is a minor nuisance, but the number of times I have forgotten to change a period to a comma approaches infinity.


While I lack a rigorous statistical analysis, you would be hard pressed to find an erlang programmer who enjoys records. Records are essentially syntactic sugar on top of tagged tuples. This sugar does not taste good.

Defining records is easy and straight forward:

-record(poetry, {
          style  :: string(),
          line   :: string(),
          author :: string()
         }).

However, using them is another story.

vogon_example() ->
    #poetry{style = "vogon",
            line = "Oh freddled gruntbuggly/thy micturations are to me/As plurdled gabbleblotchits on a lurgid bee.",
            author = "Jeltz"}.

echo_poem(Poem = #poetry{}) ->
    io:format("~s~nby ~s", [Poem#poetry.line, Poem#poetry.author]).

The need to specify the record type on a variable before using the element accessor can lead to some fairly ugly code.

contrived(Collection) ->
    %% in R14 you do not need the parens
    [(Poem#poetry.author) || Poem <- Collection].

If the need to specify the record type was removed you could do

contrived(Collection) ->
    [Poem.author || Poem <- Collection].

Which looks much cleaner. However, ugly syntax is a trivial annoyance and is primarily a subjective aesthetic concern.

The need to specify the record type is a more pragmatic problem. Writing generic code that consumes records conforming to a pattern or interface is impossible.

While it is true that erlang:element/2 can be used to access records as tuples, the usability of named fields is lost. If the record definition is changed, your code that uses erlang:element/2 may break in interesting ways. (note: that is not a Good Thing[tm])

exprecs to the rescue

I stumbled onto an interesting chunk of code by Ulf Wiger called exprecs. Exprecs is a parse transform that allows you to work magic, freeing your code from the constraints of the erlang record while still maintaining the benefits derived from named fields.

At this point it may be beneficial for you to read over the exprecs edoc. To make things simple I have generated the edoc html. Go have a quick read; I will wait.

While it doesn’t remove the need to scatter # all over your code, exprecs does enable the ability to treat records as discoverable and fungible entities. This opens the door to more generic and reusable code.

The Problem

I have written a lot of erlang code dealing with json, primarily in REST interfaces built with webmachine. The json validation and parsing into erlang terms is made extremely easy thanks to mochijson2. mochijson2 has a lot of benefits: roundtrips are consistent, json strings are parsed to erlang binaries and a single line of code is generally all you need.

However, I do find the resulting erlang terms produced by mochijson2 to be confusing and difficult to remember. Recursing down a proplist tree is not my favorite activity, and one can easily get lost in large data structures. This makes changing the data structure difficult, error prone and tedious, even with good unit tests.

The appropriate representation of large or complex data structures in erlang is a record. Due to the problems outlined above, abstracting out the mochijson2 code to create a generic json to record parser is impossible.

This means that I found myself writing json to record parsers frequently. I was violating DRY and becoming more and more frustrated.

The Solution

Thanks to the awesomeness that is exprecs, I was able to write a module that would take the erlang terms produced by mochijson2:decode/1 and transform them into a record. The code can even roundtrip from a record to json.

I no longer have to write yet another proplist walker in order to get json into mnesia. I am quite excited about this.

The json_rec.erl module exports two functions: to_rec/3 and to_json/2. The following example code illustrates the interface:

store_vogon_json(HttpBody) ->
    Json = mochijson2:decode(HttpBody),
    Record = json_rec:to_rec(Json, vogon_model, vogon_model:new(<<"poetry">>)),
    vogon_model:write(Record).  %% persist via the model (write/1 is hypothetical)

get_vogon(Author) ->
    Record = vogon_model:read(Author),
    json_rec:to_json(Record, vogon_model).

exprecs Explained

In order to give some example usage of exprecs, I am going to provide lots of contrived examples. If you want a real world use case, see the code for json_rec.

We have two modules, poetry.erl and book.erl, each with its own record defined in poetry.hrl and book.hrl:

%% include/poetry.hrl
-record(poetry, {
          style      :: atom(),
          excerpt    :: string(),
          author     :: string(),
          available  :: boolean(),
          count      :: integer()
         }).

%% include/book.hrl
-record(book, {
          style      :: atom(),
          count      :: integer(),
          available  :: boolean(),
          pages      :: integer(),
          excerpt    :: string(),
          author     :: string()
         }).

Now you have a massive, inefficient database of 100 book records and 100 poetry records. Someone has just snuck in and stolen your entire library, and you are pedantic enough to want to update your records to reflect this fact.

Since the two records have a different number of fields, and the fields are in a different order, using element/2 is not an option. This is where exprecs comes in.


First, some basic housekeeping: the record needs to be 'exported' from a module.

%% poetry.erl
%% include the record definition or put it inline
-include("poetry.hrl").

-compile({parse_transform, exprecs}).
-export_records([poetry]).

To make the above -compile work, exprecs.erl needs to be in your erlang path. For simplicity I have put exprecs.erl in a basic erlang app that all our erlang projects depend on; that way I am certain to have it available. (I need a better way to do this besides having a 'utils'/'misc' app.)

The -export_records attribute is processed by the exprecs parse transform. This is what generates and exports the funky '#get-' functions and makes the records usable.

…and the same goes for the book.erl module.

update function

Now we need to write a function that updates the count field to zero in all records, since our collection has been stolen.

-type count_record() :: #poetry{} | #book{}.
-spec reset_count(Module :: atom(), Record :: count_record()) ->
                                   {error, string()} | {ok, count_record()}.
reset_count(Module, Record) ->
    %% crash if there is not a count field
    true = lists:member(count, Module:'#info-'(fields, Record)),

    %% get the count value by specifying the field we want. notice how
    %% there is no explicit mention of what record is being used. We
    %% just care that there is a count.
    case Module:'#get-'(count, Record) of
        0 ->
            {error, "count is already zero"};
        _N ->
            {ok, Module:'#set-'([{count, 0}, {available, false}], Record)}
    end.

In order to use this we write a simple loop over all books and poetry available, specifying the module and record.

reset_all() ->
    %% loop over all modules
    lists:foreach(fun(Module) ->
                          %% reset the count of all records in the module
                          lists:foreach(fun(Record) ->
                                                case reset_count(Module, Record) of
                                                    {ok, New} ->
                                                        %% persist the update (save/1 is hypothetical)
                                                        Module:save(New);
                                                    {error, _} ->
                                                        ok
                                                end
                                        end, Module:all())
                  end, [poetry, book]).

The bane of all example code is that bad code can be easier to read. I hope the above illustrates the benefit of exprecs: namely, that it opens the door to generic, record-based code.

json_rec, a walk through

As with all code, there are quite a few bits missing, namely internal documentation. It may prove difficult for others to hack on this until I get to that. The good news is that I have extensively documented the exported functions and even written an example model.

You can pull the current code from


The goal of json_rec is to take json and produce a record, ultimately destined for some type of datastore: mnesia, riak, couch, etc. As such, json_rec assumes that you have a model for interacting with the store, e.g. standard MVC.

json_rec places a few very simple requirements on your model’s interface:

  • it MUST export new/1
  • it MUST export rec/1
  • it MUST export the exprecs transforms or the record.

At this point, if you have not read the exprecs edoc I highly recommend that you do.

Keeping with the above example, let’s make book.erl a json_rec compatible module.



-module(book).
-export([new/1, rec/1]).

-record(book, {
          style      :: atom(),
          count      :: integer(),
          available  :: boolean(),
          pages      :: integer(),
          excerpt    :: string(),
          author     :: string()
         }).

%% the exprecs export of the record interface
-compile({parse_transform, exprecs}).
-export_records([book]).

%% here we provide a mapping of the json key to a record.
new(<<"book">>) ->
    #book{};

%% if the key is unknown, return undefined.
new(_RecName) ->
    undefined.

%% return true for the #book{} indicating that we support it.
rec(#book{}) -> true;
rec(_) -> false.

At this point we can take the following json and transform it into the #book{} record.

{
    "style": "fiction",
    "count": 1,
    "available": true,
    "pages": 42,
    "excerpt": "Good bye and thanks for all the fish.",
    "author": "Adams, Douglas"
}

We can get a #book{} record from the above with

-spec json_to_rec(Json :: string()) -> #book{}.
json_to_rec(Json) ->
    ErlJson = mochijson2:decode(Json),
    Record = book:new(<<"book">>),
    json_rec:to_rec(ErlJson, book, Record).

Other features

json_rec will try its best to transform json into known records, i.e. ones exported from the module. However, if Module:new/1 returns 'undefined', then it will fall back to a proplist. The major downside of this is that you lose the clean round trip that mochijson2 gives you.

json_rec also supports nested records. Whenever a json dictionary key has a dictionary as a value, json_rec will call Module:new/1 to determine if it is a known record type. If it is, json_rec will create a record and make it the value of the parent record field.

json_rec supports a list of dictionaries as well.

In summary, I have tried to support all reasonable data structure combinations. json_rec makes a best effort to do what you expect. However, it is neither an AI nor Turing-complete, so I am sure there are various combinations of lists and dicts that will not work.


json_rec is an 80% solution that has saved me a ton of copy/paste coding. I have found it extremely useful in saving my sanity when transforming json into usable data.

I would like to thank Ulf Wiger for creating exprecs, making json_rec possible.

Updates for 2011-03-30

Entertaining the kid

I have a 15-month-old daughter who simply loves to play on the laptop, pressing random keys until I notice. Because of this I have gotten into the habit of closing my laptop while not using it. This has saved me from sending lots of random messages to random people.

Closing and opening my laptop all day is quite tedious. The best way to resolve this problem is to place a greater temptation in front of her. The only question is: what would that be?

I found an old netbook lying around that I hadn't used in months. So I wrote a game.

I have been meaning to play with pygame for a while now and this gave me the perfect excuse to do so. The ‘requirements’ for the game were:

  • it must be simple enough to do in about an hour
  • it should do something on any input
    • the input is any keypress.
  • it should entertain a 15-month-old toddler

The resulting design:
  • display an image
  • hide that image behind a grid of 4×4 black rectangles
  • on any key press remove a random rectangle
    • this causes a section of the hidden image to be displayed
  • if the entire image is shown, the next keypress cycles to the next image.
  • repeat

The above is extremely rudimentary. However, the audience has not yet acquired sophisticated tastes.

I wrote the pygame app last night and tested it out this afternoon. She loves it. She now has her very own laptop. She can bash the hell out of the keys and mommy and daddy do not stop her. Also, something happens on the screen too! Most exciting ;)

Usage is simple:

  1. mkdir -p ~/.phoebe/images
  2. put as many images as you want in the above dir
  3. python

Problems and known limitations:

  • I wrote this in an hour while learning pygame
  • I assume a lot, and if you aren't on the latest ubuntu with pygame it will probably throw exceptions and crash. Sorry.
  • I did get it working on an old debian install with python2.5, see comment in World.__init__

If I do more with this I will throw it up on github.

Here is the python source:

#!/usr/bin/env python

import os
import sys
import glob
import itertools
import random
import datetime
import pygame
from pygame.locals import *

world = None

def init():

    global world
    world = World()

class World:
    def __init__(self):
        # this might not be available in older versions of python. If
        # that is the case, simply hard code the resolution to your
        # screen size.
        vid_info = pygame.display.Info()
        self.size = self.width,self.height = vid_info.current_w/2,vid_info.current_h/2

        self.screen = pygame.display.set_mode(self.size)

        images_files = glob.glob(os.environ["HOME"]+"/.phoebe/images/*.jpg")

        self.images = itertools.cycle(images_files)
        self.image = None

        self.grid_size = 4

        self.masks = self._get_masks()

        self._to_blit = []


    def _get_masks(self):
        black_mask = pygame.Surface((self.width/self.grid_size, self.height/self.grid_size))

        rv = []
        for y in range(0,self.grid_size):
            ay = y * black_mask.get_height()
            for x in range(0,self.grid_size):
                ax = x * black_mask.get_width()
                rv.append((black_mask, (ax, ay)))
        return rv

    def _next_image(self):
        # self.images is an itertools.cycle, so this never runs out
        img = self.images.next()
        img_surface = pygame.image.load(img).convert()

        self.image = (pygame.transform.scale(img_surface,self.size),(0,0))

    def next(self):

        self._to_blit = []

        if not len(self.masks) or not self.image:
            # no more masks, setup the next image
            self._next_image()
            self.masks = self._get_masks()
        else:
            # we have masks, remove one
            if len(self.masks) > 1:
                # if there is only one mask left, then range is 0 to 0,
                # and randrange complains
                to_r = random.randrange(0,len(self.masks)-1)
                del self.masks[to_r]
            else:
                # this is the last mask, remove it
                self.masks = []

        # put the image into the blit list
        self._to_blit.append(self.image)
        self._to_blit += self.masks

    def blit(self):
        screen = pygame.display.get_surface()
        for s,p in self._to_blit:
            screen.blit(s,p)

def main():
    global world

    pygame.init()
    init()

    clock = pygame.time.Clock()
    world.next()

    done = False
    while not done:
        clock.tick(10)

        world.blit()
        pygame.display.flip()

        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                done = True
            elif event.type == pygame.KEYDOWN:
                if event.key == pygame.K_ESCAPE:
                    done = True
                else:
                    # any other keypress reveals another section
                    world.next()

    return 0

if __name__ == "__main__":
    sys.exit(main()) Updates for 2011-01-20

  • @trashbird1240 ha! I saw that conversation. I was entertained
  • @tius that just went into ~/.emacs thanks
  • finally made it to a LUG. been a while

OrgMode and Firefox conversations

I like to use OrgMode because I feel organized and in control. That is definitely an illusion, but it is a nice one.

I am still using firefox to browse the web, mainly because w3m doesn't work well at all outside of plain text pages and ezbl is not yet ready. This means I am generating a lot of information (history, tabs, bookmarks) that is outside of my massive org collection of stuff.

When I find something I would like to read but don't have time for at that moment, I would like to store it in orgmode. However, copying and pasting the url+title got really tedious, and as with anything tedious I tend not to do it. So my firefox 'to read later' bookmark folder got full, and without a way to prioritize them it became a morass of unused links.

Wouldn’t it be nice if emacs could pull the current url and title from firefox and put that into org for me?

Cue MozRepl, stage right.

MozRepl gives you a simple line-oriented repl into the entire browser context. You could even use telnet if you really wanted to.

So I hacked up some quick primitive functions to ask MozRepl for the current url and title.

First, the function that will send a string to Firefox. The repl MUST already be started, which you can do with M-x run-mozilla:

(require 'moz)

(defun jk/moz-get (attr)
  (comint-send-string (inferior-moz-process) attr)
  ;; try to give the repl a chance to respond
  (sleep-for 0 100))

The sleep-for call is there because there is no flow control on the mozrepl. If you call jk/moz-get in succession too quickly you will get whatever the last result was, multiple times.

Get the current url:

(defun jk/moz-get-current-url ()
  (jk/moz-get "repl._workContext.content.location.href"))

Get the current title:

(defun jk/moz-get-current-title ()
  (jk/moz-get "repl._workContext.content.document.title"))

The repl buffer will now have the result printed as a string. Not useful. So we work a little buffer-walking magic that makes too many assumptions.

(defvar jk/moz-current nil
  "Last string returned from the moz repl.")

(defun jk/moz-get-current (moz-fun)
  (funcall moz-fun)
  ;; doesn't work if repl takes too long to output string
  (save-excursion
    (set-buffer (process-buffer (inferior-moz-process)))
    (goto-char (point-max))
    (setq jk/moz-current (buffer-substring-no-properties
                          (+ (point-at-bol) (length moz-repl-name) 3)
                          (- (point-at-eol) 1))))
  (message "%s" jk/moz-current))

This simply calls whatever func is passed in as moz-fun and grabs the last output line of the moz repl.

The last bit for getting what we want is to put all the above together into a couple convenience functions:

(defun jk/moz-url ()
  (jk/moz-get-current 'jk/moz-get-current-url))

(defun jk/moz-title ()
  (jk/moz-get-current 'jk/moz-get-current-title))

Now we can run (jk/moz-url) and it returns the url of whatever window/tab is currently active; the same goes for (jk/moz-title).

The last step is to get this into org via org-capture using org-capture-templates.

(setq org-capture-templates
      '(("t" "Todo" entry (file+headline "~/org/" "Tasks")
         "* TODO %?\n %i\n %a")
        ("n" "Notes" entry (file+datetree "~/org/")
         "* %?\nEntered on %U\n %i\n %a")
        ("b" "Bookmark" entry (file+datetree "~/org/")
         "* %(concat \"[[\" (jk/moz-url) \"][\" (jk/moz-title) \"]]\")\n Entered on %U\n")))

The orgmode capture template supports %(sexp), so you can run arbitrary elisp code to generate the content of the template.

Now all I have to do is C-c c b C-c C-c and I have a 'bookmark' in org. I am finding this very useful.

dot emacs conf00.d

Earlier I gave a brief overview of how my config file is loaded. The first directory loaded is ~/.emacs.lisp/conf00.d/, where we find the following files:

  • bbdb.el
  • dired.el
  • misc.el
  • paren.el
  • ui.el
  • uniquify.el


bbdb.el

I do not have anything particularly interesting or fancy here. It loads bbdb and adds it to a hook.

(add-to-list 'load-path "/usr/share/emacs/site-lisp/bbdb")
(add-to-list 'load-path "/usr/share/emacs/site-lisp/bbdb/bits")
(load "bbdb-autoloads")
(add-hook 'gnus-startup-hook 'bbdb-insinuate-gnus)

The only thing of note is the gnus startup hook. This lets me tab complete email addresses in gnus, as long as they are in bbdb. Unfortunately, that is all there is here.

I would like any address I send email to to be entered automatically in bbdb with an 'auto' tag. However, I have been too lazy to look that up.

I would like to 'reverse insinuate' my jabber rosters: grab jids and info from the roster+vcard and place them in bbdb. That requires more 'magic' than I currently know how to do.


dired.el

I find myself using C-x d more than I ever thought I would. Combined with C-s it is a great way to quickly search for what you need.

(eval-after-load "gnus"
  '(progn
     (require 'gnus-dired)
     (add-hook 'dired-mode-hook 'turn-on-gnus-dired-mode)
     (define-key dired-mode-map "a" 'gnus-dired-attach)

     (add-hook 'dired-mode-hook
               (lambda ()
                 (define-key dired-mode-map "\C-xm" 'jgk/dired-w3m-find-file)))))

(defun jgk/dired-w3m-find-file ()
  (interactive)
  (require 'w3m)
  (let ((file (dired-get-filename)))
    (if (y-or-n-p (format "Open 'w3m' %s " (file-name-nondirectory file)))
        (w3m-find-file file))))

(defun jgk/w3m-browse-current-buffer ()
  (interactive)
  (let ((filename (concat (make-temp-file "w3m-") ".html")))
    (unwind-protect
        (progn
          (write-region (point-min) (point-max) filename)
          (w3m-find-file filename))
      (delete-file filename))))

This evals a few functions after gnus is loaded. The only feature in here that I still use occasionally is attaching from a dired buffer. Everything else is not exactly useful, to be honest. (Need to add to my todo.)


This file is a collection of random stuff that does not configure a package, or that is just a simple setting tweak. Since the code is unrelated, I will take it line by line.

(global-font-lock-mode t)

Turns on font lock so things are pretty.

(fset 'yes-or-no-p 'y-or-n-p)

This is very handy. It changes yes/no prompts to y/n, so every prompt that requires a yes or no becomes a single keystroke instead of typing out a whole word.

(require 'mwheel)

Simply turns on scrolling with the mouse wheel.

(setq next-line-add-newline nil
      require-final-newline t
      use-file-dialog nil
      use-dialog-box nil
      transient-mark-mode t)
  • next-line-add-newline: stop adding newlines when you move the cursor, only do it on enter
  • require-final-newline: always add a newline at the end of file, I find a lot of programs out there need this
  • use-file-dialog & use-dialog-box: disable dialog boxes!
  • transient-mark-mode: highlight active regions
(setq compilation-scroll-output t)

Automatically scroll output from compilation commands. I got this from

(push '("." . "/home/justin/.emacs.backup") backup-directory-alist)

I got really sick of having ~ files all over the place and adding them to the ignore filters for various vcs (git, hg, etc.). This puts them all in one place. The files that are created are unique since they have the full path in them, but with ! instead of /.
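The ~ backups are only half the clutter; the #autosave# files can be corralled the same way. A sketch (the directory name here is my placeholder, not from the original config):

(make-directory "~/.emacs.autosave/" t)
(setq auto-save-file-name-transforms
      '((".*" "~/.emacs.autosave/" t)))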


Enables show-paren-mode with the ‘expression style, which highlights the whole enclosing expression and gives you great visual feedback as to where you are in the expressions.

(show-paren-mode t)
(setq show-paren-style 'expression)


I like a clean UI. I never use the toolbars or menus and the scroll bar is simply taking up space. The following turns them all off.

(if (fboundp 'tool-bar-mode) (tool-bar-mode -1))
(if (fboundp 'menu-bar-mode) (menu-bar-mode -1))
(if (fboundp 'scroll-bar-mode) (scroll-bar-mode -1))


I often find myself working in multiple projects within the same emacs session. In fact I really only have a single emacs session for everything. uniquify makes the buffer names unique based on location. So when you have two files of the same name open, uniquify will walk up the directory tree until it finds the unique part of the path, then change the buffer names to match. This is simply awesome.

;; from
(require 'uniquify)
(setq uniquify-buffer-name-style 'reverse
      uniquify-separator "/"
      uniquify-after-kill-buffer-p t ;; rename after killing uniquified
      uniquify-ignore-buffers-re "^\\*") ; don't muck with special buffers

The above settings should be obvious. I have / as the separator, all uniquified buffers are redone when one is killed, and the scratch-like buffers are ignored.
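To illustrate what the ‘reverse style produces (the paths here are made up):

;; ~/projA/src/Makefile and ~/projA/lib/Makefile become buffers named
;; Makefile/src and Makefile/lib, instead of Makefile and Makefile<2>.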

That is my first ‘runlevel’ for emacs. Not much here, so I took it all in one go. Any tips or improvements are most welcome.


I have been using emacs for a while now. I would guess about five years. I would classify myself as a power user and not really a true emacs user. To be a ‘true’ emacs user you should at least be able to write elisp and generally hack emacs. Instead, I rely very heavily on existing elisp code and blogs. Of course, you can not mention emacs without pointing to

Writing about your dot emacs has become all the rage lately and I figured I would join in the fun. By writing about this I hope to do a few things:

  • filter out the junk in my dot emacs
  • get tips and/or ideas about what I can do better
  • finally get the motivation to write some elisp

The ~/.emacs

(setq my-init-dir "~/.emacs.lisp"
      my-pkg-dir (concat my-init-dir "/pkg")
      my-site-dir (concat my-pkg-dir "/share/emacs/site-lisp"))
(load "~/.emacs.lisp/init.el")

And the ~/.emacs.lisp/init.el

(add-to-list 'load-path (expand-file-name "~/.emacs.lisp"))
(add-to-list 'load-path (expand-file-name (concat my-init-dir "/pkg")))
(load "safe-load")

(dolist (file (reverse
               (mapcar 'file-name-sans-extension
                       (file-expand-wildcards
                        (concat my-init-dir "/conf[0-9][0-9].d/[a-z-]*.el")))))
  (safe-load (expand-file-name file)))

(safe-load "keys.el")

I can not recall where I got this idea from, but it has served me well so far. As any *nix user will note, this is inspired by the rc.d runlevel concept. I can have conf00.d to conf99.d as subdirectories of the main .emacs.lisp directory. This allows for relatively easy control of the ‘boot’ order of various modes, apps, etc…

I have only been bitten by the implicit boot order a couple of times. Currently, I only have 0-3 and I am just thinking about adding a 4.

One thing to note is that keys.el is loaded last; I wanted a single place to find all my global key bindings.
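As a sketch, keys.el is just a pile of global-set-key calls. These particular bindings are illustrative, not my actual ones:

(global-set-key (kbd "M-/") 'hippie-expand)
(global-set-key (kbd "C-c c") 'compile)
(global-set-key (kbd "C-x C-b") 'ibuffer)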

In later posts I will go over what is hiding in the conf00.d-conf03.d directories, but for now I will move on.

Here are some stats:

  • 36 elisp files
  • 854 loc
  • 73 packages


I have still not found a good way to deal with packages. As the stats above show, I have 73 of something via ls -l|wc -l.

I decided to put all pkgs under ~/.emacs.lisp/pkg. Normally I just grab a release tarball and extract it there, then have an elisp file under conf??.d add the path to the load-path list and it’s working. Sometimes I grab a clone of a repo and dump it in there too.
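For example, after extracting a tarball to ~/.emacs.lisp/pkg/foo-1.0, the conf??.d file wires it in roughly like this (foo is a placeholder package name, not one from my actual config):

(add-to-list 'load-path (concat my-pkg-dir "/foo-1.0"))
(require 'foo)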

This works out ok, except that when the pkg needs a ./configure the parameters can get quite hairy; keeping the ‘install’ under ~/.emacs.lisp/pkg is messy then.


Of course I keep the entire ~/.emacs.lisp dir under hg, pushed to a remote server. This way I can pull onto my laptop or any other random *nix and get my exact environment.

Does this work in practice? Not really.

Oftentimes ~/.emacs.lisp gets too far out of sync and I have to choose what to blow away. (I can be lazy with horrible merges.) Also, having clones of other repos in ~/.emacs.lisp/pkg means I have to delete their .git or .hg and lose all of that info. (No, I am not going to commit those dirs.) Finally, I get lazy and forget to commit and/or push, and when I pull everything is ancient.

I like it, kind of

There are downsides to this method: finding where you configured something can be a pain, and grep’ing files all the time does become tedious.

I would say it is a win so far. At least I have not discovered a better method for managing a huge emacs config.

Ideas welcome.

Audio desires

Current Situation

I have a significant lack of audio options in my house and I love listening to music. I spend most of my time in my office which does have good audio. However, when I am in the rest of the house the options are severely limited.

The wife and I both have laptops downstairs in the living room. On occasion we move them about when needed. My son has an iMac in the dining room. The iMac is our primary source of audio.

This situation can run into conflicts. When Freeman is playing stupid-flash-games[tm] I do NOT want the blips n bleeps injected haphazardly into my Bach.

The environment

I do not have a McMansion. In fact my house was built in the late 19th century. This means that I have no idea what is in the walls, or how difficult it will be to run wires all over the place without making my house look like a mad scientist’s laboratory. So wireless is pretty much the only option for connectivity.

I have three rooms on the first floor where I would like to play audio. I would like to install this setup anywhere though. Listening to music while in the shower would be nice.

1st floor

2nd floor

What exists now.

All infrastructure is in the office on the 2nd floor.

I have a 3TB nas that is plugged into the wired network and is available over wifi. My wifi is an ancient vanilla WRT54GL. Signal reception is pretty good across the entire house. The laptops can play music off of the nfs share without any noticeable problems.

I would like to upgrade the WRT54GL to 802.11n, but that thing is stable as hell and I am reluctant to touch it.

I would also like to upgrade the wired network from 10/100Mbps to GigE, but that is for other reasons. (running vmware instances directly off the nas.)

The only thing out of the ordinary which might be notable is that we do not have a TV or audio system of any kind. The only electronics that we have are:

  • 5x laptops
  • 3x rackmounted computers
  • 1x iMac
  • 1x 3TB rack mounted nas
  • 1x n900, 1x n810
  • 2x standard lowend cell phones

All computers are running either gentoo or ubuntu with the singular exception of the iMac of course.

What I would like.

I would like to have a set of speakers in the following rooms:

  • kitchen
  • dining room
  • living room

If the system is cheap enough, I would like to add it to

  • bath
  • bed rooms
  • office


Ideally, I would like a plug n play device, but I can deal with a manual config system; ssh, http, config files on sd cards, whatever… As long as I don’t have to flip bits with toggle switches or patch kernel code, I am ok with it. (I should also add that I would prefer not to sift through gigs of forum posts to figure out how to get the device to work.)

To continue with my hardware minimization, the things it will need are:

  • power
  • audio out (preferable ‘standard’)
  • wifi (ethernet a plus)

(the audio processing + cpu, etc… is implied).

As I have looked around, I see a lot of these systems have ‘optical audio’, yet I am unable to find speakers that have ‘optical’ anything. I have come to the conclusion that you then need some kind of receiver to take the ‘optical’ signal and convert it into an analog signal over the wire for the speakers. While it might be great, that is too much equipment to slap in 4-5 rooms.

So I want a small simple box I can plug speakers into. Power + box + speakers.


So this magical box has to do something. I don’t need it to store anything. And I don’t need it to organize anything. All I need this box to do is take audio data and make the speakers sound pretty.

It could mount the shares via nfs, ftp, smb, etc… and keep some kind of continuously updated catalog. This would mean some kind of control interface. I would prefer to avoid a broken and limited control interface.

It could also just listen on a stream and play whatever is shoved down it, e.g. radio station lists per room/device.

Ideally the control interface would be 1) hackable 2) http based. (note: http != html)

Am I mud yet?

So yes, is that all clear as mud? I want to avoid some proprietary crap device that will die in two years. Then I will be left with having to jailbreak it so it will work with the next codec.

No, I do not want to deal with

Cooperative for a tablet?


I stumbled onto a post about android which provides a contrarian view on the ‘fragmentation of android’. Most pundits are pointing out how fragmented the android platform has become and how that is a weakness when compared to the unified front of iOS. The android app market has also been dubbed ‘The Wild West’ of app stores. Of course whether you view the fragmentation/wild west nature of android as a net gain or loss is up to where you sit on the consumer ladder.

I am familiar with Neuros Technology because I purchased their first audio player way back when it was a giant brick that looked like it was designed in a ’70s disco club. While the thing was hideous, it worked and it was open. Open to the point where a community formed around it for a while. I believe it is now officially part of the internet’s great bitbucket.

My point that I am slowly getting to is that I have dealt with these people before and I have high respect for them.

If you haven’t clicked over and spent time reading that post, please do. I hope you will come to similar conclusions as I have. Don’t worry I will wait.

(cue hold music)

Pretty awesome insight wasn’t it? I love it when people share their experiences from deep in the trenches of some opaque industrial mud pit. It saves me the pain and sweat of stumbling around in the dark.

Slightly More Background

I want an iPad. Well, not really. I want my phone to have a larger screen. But I want it to fit in my pocket too. I also don’t want to deal with Apple. While I know lots of people who love them and my son uses an iMac, I can’t bring myself to buy into DRM hell, no matter how sweet the siren song.

Things I would like from a tablet-ish thing:

  • sync time with ntp (not require communication with a mothership)
  • get accurate wifi information
  • be able to install stupid things my son or I hack up
  • irrevocably break it, then boot from a SD card
  • run emacs
  • ssh into the device and have that be useful
  • have all the cool almost useful abilities of a tablet
    • pass it over to wife on couch
    • read stuff in random places
    • do misc time wasting tasks
    • check communications (im,email,etc…) without having to boot laptop

Basically, I want a sane linux distro on a tabletish thing that doesn’t suck. Maemo is a great example of what I want. I have yet to try Meego so I can’t comment on that. iirc, they switched to rpm.

Less Background, More Point

I want to start a loose-knit cooperative that will build/custom order a tablet from one of these overseas electronics shops, since they apparently have very flexible designs and relatively low costs.

I know about the utter failure of the Crunchpad. However, the key difference here is that this will not be a business. It is more of a ‘pool money together, create a spec via some complicated voting mechanism, meet the minimum order quantities’ kind of thing.

This is a very unfinished thought and I am not in familiar territory.

I think this could be successful, more so than the other open hardware designs, because the goal is not to sell as many units as possible. The goal is to buy as few units as possible while still meeting the minimum order of the overseas hardware design shop.

If anyone decides to run with this, please let me know.


A couple of weekends ago I attended CPOSC in Harrisburg, PA, the home of Three Mile Island. This was a five hour drive for me, so I was uncertain at first whether I should go. However, Ontario Linux Fest was canceled and there was a talk on Puppet. It is risky to bet an entire conference on just one talk, but I took the risk anyway.

Was it worth it?

We want to start using puppet at work. It appears to be the best tool for infrastructure/system management. Maybe I will go into all the details as to why we chose puppet (extensive research) in a later post. (/me adds to org file.) So the opportunity to talk to someone who has actually used puppet was too good to pass up.

I got two big lessons from Bill Hathaway’s presentation: start small, and puppet does not do application orchestration.

Start Small

You should start with managing a small, insignificant file, such as /etc/motd. When you have that understood and working, you add a bit more to what puppet manages for you. Eventually, puppet will be managing your systems. If you try to do too much too quickly, there will be mistakes and probably lots of swearing.

No Orchestras

The biggest shock was that there is a difference between what puppet does and “application orchestration.” I am a systems newb, so perhaps this is no surprise to anyone else. However, the idea that the tool managing your systems shouldn’t also be doing upgrade orchestration was not expected.

For the other newbs out there: Application Orchestration means that you have App A running on Box 1 and App B running on Box 2. They talk to each other and the upgrades to both have to be coordinated. Meaning that if A is running 0.2 and B is still on 0.1 you are in a world of hurt.

Apparently, the solution to this is The Marionette Collective. The requirements for mcollective are also surprising (though I shouldn’t be surprised, since I knew nothing going into this). Requiring an AMQP server to broadcast ‘commands’ out to your servers seems a bit too complicated.

I need to get grounded on this whole process, but at least I have a more appropriate path forward than I did before.

Of course the other talks were awesome too. A few highlights:

  • Walt Mankowski gave a talk on perl one-liners. He is the first person to ever make me want to learn perl. I still probably won’t, but there was a definite tinge of language envy. Python and Erlang can’t do that.
  • Tom Clark gave an excellent walkthrough of twisted. It is one of those frameworks that promises to make things simple, yet whenever I try to read other twisted apps I feel like a rat caught in a mad scientist’s maze, overdosed on psychotropics. This gave me a bit more solid footing. Not sure if that solid footing is an illusion from the psychotropic-induced haze or something real.
  • I am not an iOS dev, nor do I really want to be one. One question that has always interested me about these platforms is: how the hell do you make testing scale? While applicable to any gui, it is quite challenging on iOS simulators. Sikuli was a pleasant surprise here. It is extreme alpha software and should be treated as such, i.e. operate it with a long stick and have some solid cover nearby so you can survive the inevitable explosion.


    1. don’t use the IDE
    2. don’t use the IDE

When I look back, CPOSC is a win. It is well organized and attracts a high quality crowd. It is going on my calendar for next year.

Too many options

I have started a new job at the awesome Voalte. As a quick overview, we do hospital voice, alarm and text (thus the VoAlTe) using ejabberd, freeswitch, nitrogen, etc. on the backend and iphone on the frontend. It is a slick solution to the communication problem that faces hospital staff.

I have *just* started this week. I am getting acquainted with everything and trying to absorb the code and culture as best I can. I will try to refrain from speaking directly about it for a while, since I am new. Perhaps later I will talk about my experiences in more detail.

I have volunteered to research and implement a configuration management solution. The goal is to provide precise control over deploying and configuring thousands of computers on disparate networks. This is quite a challenge and it is somewhat difficult to find a tool that meets all of our requirements.

I have looked extensively at puppet, chef, spacewalk and cfengine. They all have great features. However, as with anything there are trade offs.

I will say that it is great to have new and interesting challenges so quickly.

Fail Whale

Edit: Apparently, this was announced last year! I guess I use
apps that are as out of the loop as I am.

Perhaps I am not plugged in enough. A few days ago all my twitter apps
stopped functioning. Twitter is not an integral part of my life and so I
figured it was something that would be corrected soon. As the hours
stretched into days I decided to look into the problem.

What I found is that the twitter API has killed off basic auth in favor
of oauth. I can understand the decision to move from basic auth. I can
understand the need to make everyone use oauth. What I can not
understand is the short notification period.

Like I said, I am not a plugged-in kind of person. At least not into
these new fangled social networks and life streaming systems. However,
as a user of a service what I can not understand is the extremely short
deprecation period that twitter gave app developers.

From my quick perusal of the announcement list app developers were given
a 15 day warning. Yes, 15 days to rewrite your app and distribute the
update to all of your users. Does not make much sense to me. Unless you
are actively trying to kill off your ecosystem.

Perhaps there is an earlier announcement that I was unable to
find. Regardless, all of the twitter apps that I use are currently
nonfunctioning with no updates that I can see. Honestly, I do not hold
the developers responsible for this. Two weeks is simply not enough time
to change such a fundamental component of your app. Especially from
something as simple as basic auth to something as complicated as oauth.

This is the problem with using a platform owned by a single
company. They have complete control over the systems, as they should,
and follow only their interests. While it is convenient that the
company’s interests often align with that of their users, it is not
guaranteed to be the case. Facebook is another example of this.

So twitter, it was fun. You were able to distract me on occasion. Some
useful bits of data came my way. You are definitely not worth the hassle
though. So I say good bye.

Identica, looks like you won :)

Think before writing

I need to start thinking before I write. Or at least verify my assertions.

The other day I posted about the n900 and the droid root exploit. Well, as Brenton pointed out, you can get dev versions of droids that have root without needing to resort to bit-twiddling boot loaders. I foolishly assumed that since someone did it, it was necessary to do. sigh

Then I assumed the drivers for wifi and cell radio were closed binary drivers. Felipe Contreras quickly corrected me on that assertion.

In a previous post I jumped too far ahead of myself and was promptly thwapped by rtaycher.

My mindless ramblings make me look like a fool. However, the upside is that I learn a lot of new info by being corrected in my false reasoning. Which I love. The downside is that by being incorrect all the time, people may stop listening. I suppose as with most things in life, I just need to find the balance to create that elusive moderation.

why the n900

I received an n900 last week and it is several levels of awesome.

I do not really consider myself a gadget geek. My wife might disagree with that assertion due to all the devices littered around the house. However, my previous cell phone was almost ten years old. That’s right, I bought my last phone at the turn of the century. I would say that fact alone removes me from the gadget geek school.

That ancient phone and the n810 were a really nice combination. I could access the net via DUN over bluetooth (http, ssh, etc..) and make phone calls. I wanted to merge the two devices and I knew the n900 would do that. That ability alone made the n900 worth waiting almost a year for. (That was when I started getting sick of having two devices.)

While the n900 is a spectacular device it definitely is not the shiniest. Google and Apple have that covered. With that in mind I keep getting asked why spend more for less? Why not just get a droid or iphone with access to all the apps, multi-touch, etc.. etc..

You will never see that on the n900. Not because the devs at nokia are security geniuses, nor because the maemo community lacks hackers. That post will never be needed for the simple reason that there is a rootsh in the apps repository. Yes, that is right. My phone comes with immediate and easy access to root.

I am definitely NOT a linux geek, nor an adept hacker. I will probably never use rootsh, but other people will and I can benefit from their efforts. As I discovered with the n810 there will eventually be a need to get access to root. And when some hacker creates /sbin/butterfly they will do it without having to bit twiddle the boot loader.

The n900 is a great compromise between the draconian iphone and the loose freerunner. It has its binary drivers and inaccessible hardware (you probably can’t hack the wifi or cell radio easily.), but offers an open debian based linux distro.

One last point that probably needs yet another post, while the droid is open-ish when compared to the iphone, it is still just another way for google to get ads in front of you. While this is not inherently bad, I am not comfortable paying someone so they can market to me.

Searching for the internet’s tea person

For many years I have been making really great coffee. Everyone who drinks it is amazed at how good the coffee is. While I would love to claim that I am some kind of coffee prodigy, I simply follow Tom at Sweet Marias. The guy is pure passion. All his product recommendations and bean selections have been amazing.

Sweet Marias has a personality too. It is not some slick rounded corner corporate site trimmed down to a bland stub by lawyers. It finds that hard to reach middle above the myspace eyesore. Tom appears to be on this constant search and is tirelessly hunting for good coffee. It is not just the beans, he finds the best tools too. Pure awesome.

I hereby dub Tom of Sweet Marias the internet’s coffee person.

Now I have decided that I like tea. I am starting to like tea enough to invest in making good tea. However, I have run into a road block: I am unable to find the Sweet Marias of tea. I have found Adagio’s and other misc places, but I am not getting the same passion from them. They all look and read too polished to be run by people who really care. Don’t get me wrong, I am sure Adagio and friends are really great people, I am just not seeing the same single minded awesomeness I see at Sweet Marias (If I don’t get any real response from this post I am probably going with Adagio).

Perhaps I am making too brash an assumption that the coffee and tea cultures would be similar enough to have Tom’s doppelgänger.

I am putting this out there to see if anyone is aware of where I can go to get access to a singularly and freakishly awesome tea guru?

Random Observation

I noticed something the other day. It was surprising at first, then the more I thought about it the more I realized it made sense.

You have classic works like Don Quixote that have created iconic cultural terms: "I felt like I was tilting at windmills." There is big-endian vs little-endian, created by Gulliver’s Travels and popularized by computer culture. There are many more, but since this is a random musing I can’t think of them.

The surprising thing is that you have a massive thousand page work and what we have left from it is "tilting at windmills." I am not aware of any other popular phrase derived from the knight. I am reading Gulliver’s Travels and all that is culturally familiar is the little vs big satire.

Both of these memes occur rather quickly in the works. You don’t have to read all that much in order to stumble onto them. Is this common? Do so few people read the entire work?

Don Quixote has the cave of Montesinos and Gulliver’s Travels has the floating island kingdom of inept academic fools (which is hilarious), to name just two off the top of my head.

The real question is: Are there any iconic cultural memes that occur towards the end of the works?

I am afraid we only enrich our culture with the first 25% of great ideas.

There are reasons markets are under served

Things I knew before I started, but ignored: There is usually a reason that markets are under served.

Here is some background before I get to the real story.

I am somewhat involved in boy scouts; my son is a Webelo II and I volunteer where I can. Last summer I volunteered to coordinate summer camp. This boils down to collecting forms, having parents fill them out, copying the info to different forms, handing forms in to the main office, hoping and praying that they don’t lose track of it all, keeping copies and then transcribing yet more data. All of this is done via paper and pen. I have never spilled so much ink in my life. Keeping track of so much paper and who had done what was a nightmare of epic proportions for someone like me. Other people’s money was involved, a mixture of checks and cash, all with different amounts due that were constantly changing based on a mind-boggling laundry list of variables. If ever there was a process ripe for automation, this was it.

I thought I found an itch I wanted to scratch.

The ephemeral goal was to provide a basecamp on steroids for scouts to organize themselves. I wanted something less complicated than BigTent, but a bit more custom tailored to scouting than a generic group org system would be.

I am quite satisfied at my current job. I wanted to solve a problem, not alter my life. I quickly realized that I needed to do two things; find a designer/usability guru and figure out if this was going to be a viable project. In other words, will this eventually pay for itself?

I got my friend Brenton Klik to sign on and together we did some research. I try to use conservative numbers, but when they become too bleak I shift to conventional wisdom as found in Hacker News. This is what we found:

There are about 20,000 cub scout packs and troops in the US and that number is shrinking. Right at the beginning there is a limited customer base. If we assume a maximum 10% market penetration over the course of a few years we end up with 2,000 customers.

I didn’t really want to do this myself, so I wanted to hire someone; this means a decent developer and tech support person. With a part time support position and a full time developer, the initial yearly cost would be $80k.

In order to eventually enter the black within a few years, I would have to charge $10/mo per pack, not per user. I didn’t really see the model working on a per user basis. Nor did I want the uncertainty of ad revenue. Basecamp charges $25/mo for the minimal package, so $10/mo seemed reasonable.

Brenton and I met with a representative from the scouts to figure out whether this was a viable idea. You can read his post for details on that experience.

In my opinion the biggest hurdle is that the scouts is a volunteer-run and volunteer-funded organization. I wasn’t out to make a living on this project, but I certainly didn’t want to lose any money either. There were significantly cheaper solutions out there. None of them do much of anything particularly well, but they do it cheaply enough and tolerably well enough. Which is what matters.

As Brenton said, for the price of a coffee we found out that the project wasn’t worth it. There are reasons why blue oceans are blue.