Learning Elm

After trying to learn Clojure and ClojureScript, I got to the point where I was able to make my map app and load it with data. However, loading the initial map of California took 10 seconds, and loading up and coloring traffic volume tiles on the map took another 5. Super slow.

This is not a knock against ClojureScript, etc., but rather against my own understanding of and abilities in the language. I suck at Clojure.

So I switched to Elm.

Here is my first trial with Elm, doing the letterfall thing.


Transfer prints

It is possible to make transfer prints from ink jet print outs. Professor Gerald R. Van Hecke was absolutely correct when he said, back in 1989, that I should use my knowledge of chemistry rather than saying lighter fluid “magically” lifts off the images from magazines. Had I listened, I might have been open to other methods.

Apparently, the techniques all depend upon chemistry: something needs to attack the bonds between the ink particles and the paper. In my old use of Zippo fluid and magazines, the lighter fluid did the trick. With Polaroid type 669 transfers, in one case the ink hasn’t yet bonded to the photo paper, and in the emulsion transfer technique, hot water dissolves the bond between the emulsion and the paper. In these new-to-me techniques (it seems most articles on the internet are from 2011 through 2013, with nothing much new happening since that I can find), some substance is used to lift the ink.

A good series of articles is here, a long article covering lots of different lifting media is here, and some all-in-one PDFs are here for gel printing and here for direct transfers. This last recipe is one of many approaches that print to non-porous surfaces (cheap plastic overheads; glossy backing to printable stickers; etc.) and then slap that surface down on the receiving surface before the ink has had much chance to dry.

So next weekend’s project is lined up I guess.

How to use npm to conditionally download a file

I am working on a package to process OpenStreetMap data, cleaning up some older code. My old README used to say "download the file from…", but sometimes people have trouble with that step. What I wanted to do was to automate the process: check whether the downloaded OSM file was older than some period (say 30 days), and if so, download a new snapshot. I also wanted to use npm, because then it would cooperate nicely with all my other crazy uses of npm, such as building and packaging R scripts. Because I couldn’t find any exact recipes on the internet, here’s how I did it.

First, check out how to use npm as a build tool and the more recent why npm scripts. Both of these posts are excellent introductions to using npm scripts.

For my problem, there are two basic tasks I need to solve with npm scripts. First, I need to be able to check the age of a file, and second I need to be able to download a file. Note that because I only run Linux, I’m not even going to pretend that my solution is portable. Mac OSX users can probably use similar commands, but Windows users are likely going to have to change things around a bit. With that Linux-centric caveat aside, here is how I solved this problem.

File age

To determine if a file is too old I can use find.

find . -name "thefilename" -mtime +30

This will find a file called "thefilename" if it is older than 30 days (more or less…there is some gray area about how fractional days get counted). Rather than using this as an if statement, it’s probably easier to just use the built-in "-delete" operator in find to remove any file older than 30 days.
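
That gray area comes from find counting age in whole 24-hour periods: -mtime +30 matches only files strictly more than 30 full days old. A quick sketch in a scratch directory (the file and directory names are made up), using GNU touch to backdate files:

```shell
# make one file just past the 30-day cutoff and one just inside it
mkdir -p agecheck
touch -d "31 days ago" agecheck/old.txt
touch -d "29 days ago" agecheck/new.txt
find agecheck -name "*.txt" -mtime +30   # prints agecheck/old.txt only
```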

Download a file

To download a file, I can use curl. Specifically, I want to download the California latest file from geofabrik, so I would use

curl http://download.geofabrik.de/north-america/us/california-latest.osm.pbf > california-latest.osm.pbf

Fitting into run scripts, mistakes and all

Delete the old file

My idea was to use the find command to delete a file that is older than my desired age, and then to use the curl command to download a file if and only if it doesn’t yet exist.

First, I codified the delete operation into a run script as follows:

"build:pbfclean":"find ./binaries/osm -name california-latest.osm.pbf -mtime +30 -delete"

Running that failed spectacularly!

james@emma osm_load[master]$ npm run build:pbfclean

> osm_load@1.0.0 build:pbfclean /home/james/repos/jem/calvad/sqitch_packages/osm_load
> find ./binaries/osm -name california-latest.osm.pbf -mtime +30 -delete

find: `./binaries': No such file or directory

npm ERR! Linux 4.4.10
npm ERR! argv "/usr/local/bin/node" "/usr/local/bin/npm" "run" "build:pbfclean"
npm ERR! node v6.2.0
npm ERR! npm  v3.8.9
npm ERR! osm_load@1.0.0 build:pbfclean: `find ./binaries/osm -name california-latest.osm.pbf -mtime +30 -delete`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the osm_load@1.0.0 build:pbfclean script 'find ./binaries/osm -name california-latest.osm.pbf -mtime +30 -delete'.
npm ERR! Make sure you have the latest version of node.js and npm installed.
npm ERR! If you do, this is most likely a problem with the osm_load package,
npm ERR! not with npm itself.

The problem is that find failed because I hadn’t created the destination directory yet. I don’t really want to create a directory just to empty it, so instead I tried running a test first.

So I extended the script a little bit:

"build:pbfclean":"test -d binaries && test -d osm && find ./binaries/osm -name california-latest.osm.pbf -mtime +30 -delete"

This was another crashing failure:

james@emma osm_load[master]$ npm run build:pbfclean

> osm_load@1.0.0 build:pbfclean /home/james/repos/jem/calvad/sqitch_packages/osm_load
> test -d binaries && test -d osm && find ./binaries/osm -name california-latest.osm.pbf -mtime +30 -delete

npm ERR! Linux 4.4.10
npm ERR! argv "/usr/local/bin/node" "/usr/local/bin/npm" "run" "build:pbfclean"
npm ERR! node v6.2.0
npm ERR! npm  v3.8.9
npm ERR! osm_load@1.0.0 build:pbfclean: `test -d binaries && test -d osm && find ./binaries/osm -name california-latest.osm.pbf -mtime +30 -delete`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the osm_load@1.0.0 build:pbfclean script 'test -d binaries && test -d osm && find ./binaries/osm -name california-latest.osm.pbf -mtime +30 -delete'.
npm ERR! Make sure you have the latest version of node.js and npm installed.
npm ERR! If you do, this is most likely a problem with the osm_load package,

The problem here is that test -d binaries was doing its job, but exiting with a non-zero status. Reading the docs (npm help scripts) shows that a non-zero exit is interpreted as a problem:


  • Don’t exit with a non-zero error code unless you really mean it. Except for uninstall scripts, this will cause the npm action to fail, and potentially be rolled back. If the failure is minor or only will prevent some optional features, then it’s better to just print a warning and exit successfully.
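
The behavior is easy to see in a plain shell: a failed test propagates its non-zero status, while an if statement whose condition is false still exits zero (a sketch; no_such_dir is just a placeholder name):

```shell
# a bare `test` hands its failure status to the caller (here, to npm)
status=0
sh -c 'test -d no_such_dir' || status=$?
echo "test exit: $status"     # prints: test exit: 1

# an if statement with a false condition and no else exits cleanly
if [ -d no_such_dir ]; then :; fi
echo "if exit: $?"            # prints: if exit: 0
```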

Clearly test is the wrong tool to use here, so I switched to if; then; fi:

"build:pbfclean":"if [ -d binaries -a -d binaries/osm ]; then find ./binaries/osm -name california-latest.osm.pbf -mtime +30 -delete; fi",

And the results are better:

james@emma osm_load[master]$ npm run build:pbfclean

> osm_load@1.0.0 build:pbfclean /home/james/repos/jem/calvad/sqitch_packages/osm_load
> if [ -d binaries -a -d binaries/osm ]; then find ./binaries/osm -name california-latest.osm.pbf -mtime +30 -delete; fi

That doesn’t crash, but I also want to check that it actually deletes a file older than 30 days. So I made the directory in question, grabbed a file older than 30 days, copied it into place, and renamed it california-latest.osm.pbf:

find ~ -maxdepth 1 -mtime +30


james@emma osm_load[master]$ ls -lrt ~/3.7.10.generic.config
-rw-r--r-- 1 james users 129512 Sep 27  2014 /home/james/3.7.10.generic.config
james@emma osm_load[master]$ mkdir binaries/osm -p
james@emma osm_load[master]$ rsync -a ~/3.7.10.generic.config binaries/osm/california-latest.osm.pbf
james@emma osm_load[master]$ ls -lrt binaries/osm/
total 128
-rw-r--r-- 1 james users 129512 Sep 27  2014 california-latest.osm.pbf
james@emma osm_load[master]$  find ./binaries/osm -name california-latest.osm.pbf -mtime +30
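
As an aside, an alternative to hunting around for a genuinely old file is to backdate a fresh one with GNU touch (a sketch, using the same path as above):

```shell
mkdir -p binaries/osm
touch -d "31 days ago" binaries/osm/california-latest.osm.pbf
find ./binaries/osm -name california-latest.osm.pbf -mtime +30   # prints the path
```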

Now running my build:pbfclean should delete that file:

james@emma osm_load[master]$ npm run build:pbfclean

> osm_load@1.0.0 build:pbfclean /home/james/repos/jem/calvad/sqitch_packages/osm_load
> if [ -d binaries -a -d binaries/osm ]; then find ./binaries/osm -name california-latest.osm.pbf -mtime +30 -delete; fi

james@emma osm_load[master]$ ls -lrt binaries/osm/
total 0


Download a new file

To download a new file I need to run a simple curl command, but I also need to do two other things first. I need to make sure first that the destination directory is there, and second that the file does not already exist.

To make sure the destination directory exists, all I have to do is run mkdir -p. Alternately, I could check if the directories exist, and then run mkdir -p if they don’t, but that seems excessive for a simple two level path.

"build:pbfdir":"mkdir -p binaries/osm",

To test if the file exists already (and so to skip the download), I used if; then; fi again (having already been burned by test) as follows:

"build:pbfget":"if [ ! -e binaries/osm/california-latest.osm.pbf ]; then curl http://download.geofabrik.de/north-america/us/california-latest.osm.pbf -o binaries/osm/california-latest.osm.pbf; fi "

Here the -e option checks if the file exists, and if it does not (the ! modifier before the -e) then it will run the curl download. If the file does exist, then nothing will happen.
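
In isolation the guard behaves like this (a sketch; sentinel.txt stands in for the .pbf file and the echo stands in for the curl call):

```shell
rm -f sentinel.txt
if [ ! -e sentinel.txt ]; then echo "would download"; fi   # prints: would download
touch sentinel.txt
if [ ! -e sentinel.txt ]; then echo "would download"; fi   # prints nothing
```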

Putting them together, I first call the build:pbfdir script, and then do the curl download check and execute:

"build:pbfdir":"mkdir -p binaries/osm",
"build:pbfget":"npm run build:pbfdir -s && if [ ! -e binaries/osm/california-latest.osm.pbf ]; then curl http://download.geofabrik.de/north-america/us/california-latest.osm.pbf -o binaries/osm/california-latest.osm.pbf; fi "

(I couldn’t find the -s option documented anywhere in the npm docs, but it appears to be shorthand for --silent, which suppresses npm’s own output.)

It works fine:

james@emma osm_load[master]$ npm run build:pbfget

> osm_load@1.0.0 build:pbfget /home/james/repos/jem/calvad/sqitch_packages/osm_load
> npm run build:pbfdir -s && if [ ! -e binaries/osm/california-latest.osm.pbf ]; then curl http://download.geofabrik.de/north-america/us/california-latest.osm.pbf -o binaries/osm/california-latest.osm.pbf; fi

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0  578M    0  1194    0     0   1329      0   5d 06h --:--:--   5d 06h  1669^C

Of course, I could have just slotted the mkdir -p command inside of the build:pbfget command, but this is a better example of how to cascade two run scripts. And besides, maybe in the future I will be using a symbolic link pointing into a big disk, and so mkdir -p would be inappropriate.

The final scripts portion of my package looks like this:

  "name": "osm_load",
  "version": "1.0.0",
  "description": "Load OSM data (California) into a local database",
  "main": "load.js",
  "scripts": {
      "test": "tap test/*.js",
      "build": "npm run build:pbfclean  && npm run build:pbfget",
      "build:pbfclean":"if [ -d binaries -a -d binaries/osm ]; then find ./binaries/osm -name california-latest.osm.pbf -mtime +30 -delete; fi",
      "build:pbfdir":"mkdir -p binaries/osm",
      "build:pbfget":"npm run build:pbfdir -s && if [ ! -e binaries/osm/california-latest.osm.pbf ]; then curl http://download.geofabrik.de/north-america/us/california-latest.osm.pbf -o binaries/osm/california-latest.osm.pbf; fi "
  }

An example of using sqitch with cross-project dependencies

File this as yet another post of something I couldn’t find when searching the internet. I recently started using sqitch and despite the horrible spelling (dude, my brain always puts ‘u’ after ‘q’; not cool), the tool is incredibly helpful to bring order to my chaotic database mis-management practices.

This post isn’t about sqitch itself—there are lots of tutorials available for that—but rather about using sqitch to manage dependencies across projects. When learning sqitch I made a big project and dumped all of the deploy/revert/verify/test rules for each table/schema/function I needed for part of a database. Now that I have a little bit of a clue, I’m moving towards smaller, do-one-thing packages. But to enable that, I had to figure out how to enable cross-project dependencies in sqitch.

There isn’t an example that I could find anywhere, so I just hacked on the sqitch.plan file until things worked.

This is the original plan from the monolithic project. This snippet first adds a schema, then a counties table, then a city abbreviations table. The cities are located inside counties, and there are links between the two tables, so the cities sql needs to depend on the counties, and both need to depend on the schema.


appschema 2016-02-02T20:56:11Z James E. Marca <james@example.com> # Add schema for geocoding work.
counties_fips [appschema] 2016-02-02T23:23:13Z James E. Marca <james@example.com> # Add counties_fips table.
city_abbrevs [appschema counties_fips] 2016-02-04T17:57:02Z James E. Marca <james@example.com> # Add city abbreviations.

Splitting this into three projects, one for the schema, one for counties, and one for cities:

First, after initializing the geocode_schema package and adding its change, the sqitch.plan looks like:


geocode_schema 2016-03-16T16:30:54Z James E. Marca <james@example.com> # add schema for geocoding

Now, when creating the county package, as I add the new SQL I have to declare its dependency on the geocoding schema package:

sqitch add county_fips --requires calvad_db_geocode_schema:geocode_schema -n 'county_fips table'

Unlike in the sqitch tutorials, here the project-specific dependency has the form “project_name:change_name” instead of just “change_name”.

This creates the plan file something like the following:


county_fips [calvad_db_geocode_schema:geocode_schema] 2016-03-16T17:02:35Z James E. Marca <james@example.com> # county_fips table

As usual, after slotting in a non-trivial deploy/county_fips.sql, etc, if I simply attempt to deploy this new addition to a database it will fail.

james@emma calvad_db_county[testingsqitch]$ sqitch deploy db:pg:sqitchtesting
Adding registry tables to db:pg:sqitchtesting
Deploying changes to db:pg:sqitchtesting
Missing required change: calvad_db_geocode_schema:geocode_schema

So the acid test: first deploy the calvad_db_geocode_schema project, then try the county deploy again:

james@emma calvad_db_county[testingsqitch]$ cd ../calvad_db_geocode_schema/           
james@emma calvad_db_geocode_schema$ sqitch deploy --verify db:pg:sqitchtesting
Deploying changes to db:pg:sqitchtesting
  + geocode_schema .. ok
james@emma calvad_db_geocode_schema$ cd ../calvad_db_county/                   
james@emma calvad_db_county[testingsqitch]$ sqitch deploy --verify db:pg:sqitchtesting
Deploying changes to db:pg:sqitchtesting
  + county_fips .. ok

It worked!

Popping into the database and looking at the sqitch tables is also instructive:

psql (9.4.5)
Type "help" for help.

sqitchtesting=# \dt sqitch.
           List of relations
 Schema |     Name     | Type  | Owner 
 sqitch | changes      | table | james
 sqitch | dependencies | table | james
 sqitch | events       | table | james
 sqitch | projects     | table | james
 sqitch | releases     | table | james
 sqitch | tags         | table | james
(6 rows)

sqitchtesting=# select change_id,change,project,note from sqitch.changes ;
                change_id                 |     change     |         project          |           note           
 f0964df0e223700ad34d9bd50bd48a8cde14d0f5 | geocode_schema | calvad_db_geocode_schema | add schema for geocoding
 e4f6cae819e3c6753518dac9c4922c18853f6d88 | county_fips    | calvad_db_county         | county_fips table
(2 rows)

sqitchtesting=# \d sqitch.projects
                            Table "sqitch.projects"
    Column     |           Type           |             Modifiers              
 project       | text                     | not null
 uri           | text                     | 
 created_at    | timestamp with time zone | not null default clock_timestamp()
 creator_name  | text                     | not null
 creator_email | text                     | not null
Indexes:
    "projects_pkey" PRIMARY KEY, btree (project)
    "projects_uri_key" UNIQUE CONSTRAINT, btree (uri)
Referenced by:
    TABLE "sqitch.changes" CONSTRAINT "changes_project_fkey" FOREIGN KEY (project) REFERENCES sqitch.projects(project) ON UPDATE CASCADE
    TABLE "sqitch.events" CONSTRAINT "events_project_fkey" FOREIGN KEY (project) REFERENCES sqitch.projects(project) ON UPDATE CASCADE
    TABLE "sqitch.tags" CONSTRAINT "tags_project_fkey" FOREIGN KEY (project) REFERENCES sqitch.projects(project) ON UPDATE CASCADE

sqitchtesting=# select * from sqitch.projects;
         project          |                           uri                     |          created_at           |  creator_name  |      creator_email      
 calvad_db_county         | git@example.com/a/jmarca/calvad_db_county         | 2016-03-16 10:38:05.402549-07 | James E. Marca | james@example.com
 calvad_db_geocode_schema | git@example.com/a/jmarca/calvad_db_geocode_schema | 2016-03-16 10:38:14.244119-07 | James E. Marca | james@example.com
(2 rows)

Actually I don’t like the design of the projects table at all. In my opinion, the unique key for projects should be the URI, not the project name. That quibble aside, it is clear that sqitch can indeed use dependencies that are defined in external projects.

Now the next step for me is to wire this up inside of npm to make npm install pull down sqitch dependencies from the sqitch URI and then deploy/verify them, so that the package is ready for its own deploy/verify/test dance.

Dump a doc from CouchDB with attachments

In order to dummy up a test in node.js, I need data to populate a testing CouchDB database. Specifically, I am testing some code that creates statistics plots (in R) and then saves them to a doc as attachments. So for my tests, I need at least one document with its PNG attachments already in place.

I couldn’t find a simple “howto” for this on the Internet, so here’s a note to my future self.

First of all, the CouchDB docs are great, and curl is your friend. Curl lets you set the headers. In this case, I don’t want HTML to come back, I want a valid JSON document, so (in typical belt-and-suspenders style) I specify both the content type and the accept header parameters to be application/json as follows:

curl -H 'Content-Type: application/json' \
     -H 'Accept: application/json' \
     > 801447.json

The returned document has encoded the binary PNG files as JSON fields, in accordance with the CouchDB specs:

  {"name":"SERFAS CLUB",

Lovely binary-to-base64, looking good.

To verify that the returned document is actually valid JSON, I use the command line some more (I’m not sure which Linux package installed this, but there are several JSON pretty printers and verifiers out there):

james@emma files[bug/fixplots]$ json_verify< 801451.json

JSON is valid
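
If json_verify isn’t on the system, Python’s standard-library json.tool does the same job (a sketch with a stub file, since the real dump is huge):

```shell
echo '{"name":"SERFAS CLUB"}' > stub.json
python3 -m json.tool < stub.json > /dev/null && echo "JSON is valid"
```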

Then to use the document in my test, all I have to do is read it in and send it off:

function put_json_file(file,couchurl,cb){
    var db_dump = require(file) // in node you can require json too!
    request.put({url:couchurl, json:db_dump}, function(e){ // HTTP call reconstructed; `request` lib assumed
        if(e) return cb(e)
        return cb()
    })
    return null
}

To see that in action, I put my various CouchDB-related utilities in a file here, and then my actual test has a before job that creates the CouchDB database and populates it, and a corresponding after task that deletes the temporary database.

The choice between hard-coding and parameterization

I am revising some older code, and once again facing the choice of whether to hard-code database names and database tables in my code. Ordinarily I’d say no way, that’s stupid. But in this case, it might not be.

This post has zero pat answers, and is really just me thinking about things now so that my future self can revisit my thoughts and reevaluate my conclusions. However, I haven’t written a post in a while, and there is a slight chance someone else might find this useful in some way. Also, I’ve been chosen to present at this year’s PGConf US, and my talk is about testing SQL, so this is somewhat relevant to that.

So the code. First of all, a while back I wrote a general purpose query generator to grab shape data from PostgreSQL/PostGIS. See https://github.com/jmarca/shapes_postgis. That library allows me to do some fairly complicated queries from my express-based server through liberal use of parameterization. For example, in my tests, I have a handler defined as:

var app = express()
var vds_options=_.assign({
    'select_properties':{'tvd.freeway_id' : 'freeway'
                         ,'tvd.freeway_dir': 'direction'
                         ,"'vdsid_' || id"   : 'detector_id'
                         ,'vdstype'        : 'type'}
    ,'username' : config.postgresql.auth.username
    ,'password' : config.postgresql.auth.password
    }) // closing braces reconstructed

var vdsservice = shape_service(vds_options)

var server=http.createServer(app) // completion assumed

This is cool because the query coming in from the client can specify which type of vds to return. For example (again pulling from the tests), the query can be:

request({url:'http://'+ testhost +':'+_testport+'/points/10/174/407.json?vdstype=\'ff\''}
       ,function(e,r,body){ // callback reconstructed
            if(e) return done(e)
        })

By adding vdstype='ff' to my HTTP query, the generated SQL query will limit the result to only those entries that match “ff”. The generated SQL is:

with bounding_area as (
    select geom4326 as geom
    from public.carb_airdistricts_aligned_03
    where dis='SC'
)
SELECT tvd.freeway_id as freeway,
       tvd.freeway_dir as direction,
       'vdsid_' || id as detector_id,
       vdstype as type,
       st_asgeojson(st_simplify((st_dump(tvd.geom)).geom,0.1),1) as geojson
FROM newtbmap.tvd as tvd
JOIN bounding_area ON (st_intersects(tvd.geom,bounding_area.geom))
WHERE vdstype~*'ff'



stupid patents

Okay, Google just patented automated delivery vehicles. Dumb. Car with a lock on it. Not hard, super obvious. US009256852

And to paraphrase Mr. Bumble, “If the law supposes [that this kind of invention is patentable before we even have widespread use of driverless cars], then the law [(and Google)] is a ass—a idiot.”

CouchDB 2.0 preview day 2

Yesterday I fired up CouchDB 2.0 (well, the latest git master). Today I wanted to start using it and right away ran into a difference between the old way and the new way.

My test application needs CORS to be enabled. In the old days one could fiddle with the config files directly, fiddle with them in Futon, or use the handy command-line tool from the PouchDB project at https://github.com/pouchdb/add-cors-to-couchdb.

But CouchDB 2.0 by default spawns three nodes, not just one. Therefore Fauxton prevents the root user from manipulating the configuration of CouchDB directly, and instead suggests that this task be performed by “a configuration management tools like Chef, Ansible, Puppet or Salt (in no particular order).”

Configuration via configuration management tool

Because the 2.0 release isn’t really done yet, there isn’t much support available in the documentation. I couldn’t find any mention of how to use “Chef, Ansible, Puppet, or Salt” and since I’ve never used them before, I’m not going to get involved for such a simple task.

Instead, I decided to go the manual route, and try to fiddle directly with the config files for each node. In my couchdb directory, I am running the server from the ./dev/ subdirectory. Looking there, I found the following directory tree:

james@emma couchdb[master]$ tree -d dev
├── data
├── lib
│   ├── node1
│   │   ├── data
│   │   │   └── shards
│   │   │       ├── 00000000-1fffffff
│   │   │       ├── 20000000-3fffffff
│   │   │       ├── 40000000-5fffffff
│   │   │       ├── 60000000-7fffffff
│   │   │       ├── 80000000-9fffffff
│   │   │       ├── a0000000-bfffffff
│   │   │       ├── c0000000-dfffffff
│   │   │       └── e0000000-ffffffff
│   │   └── etc
│   ├── node2
│   │   ├── data
│   │   │   └── shards
│   │   │       ├── 00000000-1fffffff
│   │   │       ├── 20000000-3fffffff
│   │   │       ├── 40000000-5fffffff
│   │   │       ├── 60000000-7fffffff
│   │   │       ├── 80000000-9fffffff
│   │   │       ├── a0000000-bfffffff
│   │   │       ├── c0000000-dfffffff
│   │   │       └── e0000000-ffffffff
│   │   └── etc
│   └── node3
│       ├── data
│       │   └── shards
│       │       ├── 00000000-1fffffff
│       │       ├── 20000000-3fffffff
│       │       ├── 40000000-5fffffff
│       │       ├── 60000000-7fffffff
│       │       ├── 80000000-9fffffff
│       │       ├── a0000000-bfffffff
│       │       ├── c0000000-dfffffff
│       │       └── e0000000-ffffffff
│       └── etc
└── logs

39 directories

Clearly, there are three nodes, and each has an etc subdirectory. And find turns up what I’m looking for right where I think it should be:

james@emma couchdb[master]$ find dev -name local.ini

So I loaded each local.ini in turn into emacs and turned on CORS in each:

;port = 5984
;bind_address =
enable_cors = true
credentials = false
; List of origins separated by a comma, * means accept all
; Origins must include the scheme: http://example.com
; You can’t set origins: * and credentials = true at the same time.
origins = *

You can’t just copy node1’s local.ini to all three nodes, because each file contains the node’s UUID. Duplicate (or triplicate!) UUIDs would be a little stupid…even I know that.

I restarted the three nodes using dev/run, and then for good measure I downloaded haproxy from SlackBuilds, built it, installed it, then ran

/usr/sbin/haproxy -f rel/haproxy.cfg
[WARNING] 279/120801 (18768) : config : log format ignored for frontend 'http-in' since it has no log address.
[WARNING] 279/120801 (18768) : Health check for server couchdbs/couchdb1 succeeded, reason: Layer4 check passed, check duration: 0ms, status: 3/3 UP.
[WARNING] 279/120803 (18768) : Health check for server couchdbs/couchdb2 succeeded, reason: Layer4 check passed, check duration: 0ms, status: 3/3 UP.
[WARNING] 279/120805 (18768) : Health check for server couchdbs/couchdb3 succeeded, reason: Layer4 check passed, check duration: 0ms, status: 3/3 UP.

I had to switch the port from 5984 to 5985 in rel/haproxy.cfg because I’m currently running 1.6.x CouchDB on 5984, but the proxy worked. I was also able to ping the proxy from a different machine, because it listens on *, not just localhost.

james@emma couchdb[master]$ ssh 
Last login: Wed Oct  7 12:14:03 2015 from
james@kitty ~$ curl
{"couchdb":"Welcome","version":"4ca9e41","vendor":{"name":"The Apache Software Foundation"}}

I haven’t actually tested whether or not I’ve set up CORS properly. That’s for my next post I guess.

Upgrade CouchDB to 2.0.0 preview/master branch

I was inspired today to try couchdb master, which is more or less the
2.0 preview. I ran into a minor problem that didn’t seem to be
documented anywhere.

I have a repo that I’ve been using to track the 1.6.x patches, and I
just pulled to that, checked out master, and tried to configure.

git pull
git checkout master

The configure process started to download a lot of stuff using git,
then crashed with a mysterious complaint about an app dir and an app
file missing.

Is it Erlang?

Since I’m on slackware, I tend to compile everything that isn’t
standard Slackware, and the standard SlackBuild for Erlang these days
is 17.4. I know I’ve had trouble with that in the past, so I took a
look at the INSTALL file and then the git logs and saw that the
maximum Erlang mentioned is 17.0. So I downloaded 17.0, compiled it,
and replaced 17.4 with 17.0.

Same problem. ./configure ran much faster, but failed with the same error.

Is it just me?

I started to get discouraged, feeling like perhaps CouchDB wasn’t
going to let me relax any more. Because the error was in the sub
projects, I poked around the configure and Makefile files, and
didn’t see a way to force the clean checkout. So I just deleted the
problem directory (./src/couch_index) and ran configure again.
Again it crashed, but this time on a different file.

Because I trust git and because it isn’t my project, I just deleted
all of the directories under ./src/ and did a git status. Git
said that all was okay, so clearly none of the stuff under ./src was
under version control.

Rerunning ./configure this time checked out all of the projects, and
completed successfully.

Sadly, at the end of the configure step, I read the words:

Updating lager from {git,"https://git-wip-us.apache.org/repos/asf/couchdb-lager.git",
Updating bear from {git,"https://git-wip-us.apache.org/repos/asf/couchdb-bear.git",
james@kitty couchdb[master]$

Gone is the admonition

You have configured Apache CouchDB, time to relax.


I went ahead and restored Erlang to 17.4, re-ran the configure step,
then ran make. Everything ran smoothly, aside from a minor hiccup
requiring me to run sudo pip install sphinx then make again.


I didn’t want to install the new CouchDB, but rather just wanted to
play with it. Reading from https://couchdb.apache.org/developer-preview/2.0/,
I executed dev/run from the command line after the make completed
successfully. After it fired up the three nodes of the CouchDB 2.0
service (yay, 3 nodes out of the box!), I noted the root user and
password, and hopped over to my browser. The new Fauxton popped up, I
logged in with the root username and password,
and poked around the empty CouchDB.

Of course, not much there, but so it goes.

I haven’t had the guts to try cloning any of my old databases from
CouchDB 1.6.x (that’s for some other day). Instead I satisfied myself
with making a new, non-root user.

Unlike the old version of Futon, there isn’t an obvious place in
Fauxton to add a new user. I also found that the 2.0 docs aren’t
super complete, so I was curious if the old, curl-based method of
adding users (documented here) would work.

I ran the following command:

curl -X PUT http://localhost:15984/_users/org.couchdb.user:james \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-d '{"name": "james", "password": "pluot caramelized muffin breakfast", "roles": [], "type": "user"}'

Curl reported success, and I poked the _users database in fauxton
and saw my new user, with the password hashed properly, of course.  Now I can
log in as “james” rather than “root”.

So the upgrade to 2.0 developer preview is a success. Next I have to
actually test out all the new features.

Musing on summer tarts and cobblers

Two weeks ago I made a blueberry and nectarine cobbler, more or less sticking to the recipe from Thomas Keller’s book Ad Hoc at Home. My only variation was that I added nectarines too, not just blueberries. It was terrible; in my opinion the worst fruit cobbler I’ve ever made. The “cobbler” part became a gross, soggy layer of cake-like stuff on top of a too-thin layer of fruit. On the one hand, perhaps my pan was too big and the fruit spread out too much, but on the other hand, if the pan was too big, why did the topping (which was supposed to come out like individual dumplings) glob together into a single surface? Sucky recipe, bad quality control on the cookbook authors’ part, thereby reinforcing my dislike of celebrity chefs and their vanity cookbook projects.

Anyway, that disaster got me thinking about making another blueberry and nectarine cobbler. While I usually go for b&n pie with a proper crust, the time constraints of yesterday’s dinner party precluded putting in the time to make the crust. And Brooke wanted a cobbler.

So I started thinking what would make a good cobbler topping, and I remembered the success I had a long time ago making the caramel topping on pecan rolls. The basic idea is to press half a stick of butter into a cake pan, then layer on a cup or so of brown sugar. As the pecan rolls bake in the oven, the butter and sugar turn into caramel and infuse the pecan rolls with sticky goodness.

So I raided the fridge for some butter and discovered (horrors) that all I had left was a little blob of unsalted butter. But I also spied some clarified butter in a little container. Good enough, so I mixed the two and pressed them into the bottom of my cake pan. Being the good cook that I am, I licked the butter off my fingers—and discovered that the clarified butter wasn’t clarified butter, but rather leftover butter-sage sauce!

It’s a funny thing but I am actually a pretty good taster of food (although I am not a very good taster of wine) (or else maybe I just drink a lot of swill) (but I digress). As I tasted the butter, I definitely tasted the sage, and I decided I was okay with that, but I could also taste a hint of garlic, and I was not okay with that. Since I had just crushed and chopped garlic for the sizzling shrimp I was going to make, I really had to think about whether it was my tongue tasting the garlic or my nose smelling it, and that gave me time to think about how the sage would work with the fruit.

I decided the garlic really was in the butter, and it had to go (actually I just added it to the oil I was going to use for the shrimp), and I grabbed a fresh stick of, sadly, salted butter. But I also decided that I really wanted the sage, so I trucked out to the garden to grab some sage leaves. My sage plant of several years got uprooted and didn’t survive this spring’s planting, so all I have is a variegated sage plant with lots of very small leaves. Still good, but I wanted the visual of the leaves, not just the flavor. Then I saw the lemon verbena plant we have growing next to the sage, which we intended to use for tea but instead just let grow. I remember Emma made some fruit dessert once—poached peaches I think—with lemon verbena in the sugar syrup, so I grabbed about 10 nice looking leaves along with the sage.

After washing all the leaves, I placed them in a sunburst pattern on top of the brown sugar I had pressed into the thick layer of butter. Then I added about a quarter of the crumble from Julia Child’s apple crumble recipe on top of the leaves so that I couldn’t see them any more, and then I tumbled alternating layers of nectarines and blueberries on top of that. Finally, when the fruit was about to the top of the cake pan, I topped it with the rest of the crumble topping (one cup oats, half cup flour, 6 tbsp butter, pinch of salt, 3/4 cup brown sugar, buzzed in the Cuisinart to mix) and pressed it down firmly to make a solid layer of sugar-butter-oats.

My idea was to bake it for about an hour at 350 until I could see the caramel bubbling up the sides, and until I could see the fruit begin to bubble through the topping. Then I was going to flip the whole mess onto a big plate, so that the caramel and leaves ended up on top, and the crumble ended up on the bottom like a tart crust.

The results were visually disastrous, but the flavors were great. The few sage leaves really spiked the sugars and flavors of the fruit, and the lemon verbena added a hint of “mystery flavor” that is always fun in a dessert. The crumble crust didn’t add much for me, however, and I don’t think I’ll do it quite that way again.

Unfortunately, I used a completely wrong pan for the cake pan. I actually used a removable bottom pan, which was pretty stupid because a lot of the caramel seeped out onto the baking sheet underneath (I’m not that stupid) rather than bubbling up the sides. And after I flipped the whole thing onto the serving platter, I realized this was just like a tarte tatin, and I could have made it in a cast iron skillet with a pie crust bottom.

So I’m going to make this again, but this time:

  1. use a cast iron pan
  2. maybe put the lemon verbena and sage leaves down first, then the butter, then the sugar, so that the leaves show
  3. perhaps a graham cracker crust on top, so it holds together a bit more than the crumble, and gives a bit more crunch
  4. or else perhaps a puff pastry topping that becomes the bottom, because how cool is it to have crispy puff pastry at the bottom of an oozy drippy fruit tart?

The best part about this dessert was its reception. I had a small serving and really liked the flavor, which is rare for me (I usually just eat my cooking rather than enjoy the flavors). After the first round there was about half the dessert still left on the plate. I mentioned that it looked like we hadn’t really made a dent in the dessert, and suddenly all the adults said they’d like more. In this day and age of low carbs and healthy eating, that’s a resounding success. Finally, when we were cleaning up, there was a very small serving left. I said, “Hah, we almost finished it!”, whereupon Marc asked for a fork and finished it off right from the serving platter. A dessert that is all gone the night it was served is the best kind of dessert, in my opinion.

But while the flavors were great, there is room for improvement, and I have inspiration for more tarts and crumbles.