More with the GDAL/OGR perl bindings

So my last post talked about my struggles to finally get something saved in the database using the native perl bindings into the GDAL/OGR library. Once I got that working and pushed out the post, I immediately started loading up multiple files and playing around with the data. One thing I noticed was that it was impossible to separate different “trips” within the data without playing around with space and time. What I wanted was an easy way to flag each batch of points with a field identifying the run.

The auto-generated schema for the GPX data looks like this:

\d testogr.track_points
                                              Table "testogr.track_points"
       Column       |           Type           |                               Modifiers                                
--------------------+--------------------------+------------------------------------------------------------------------
 ogc_fid            | integer                  | not null default nextval('testogr.track_points_ogc_fid_seq'::regclass)
 wkb_geometry       | geometry(Point,4326)     | 
 track_fid          | integer                  | 
 track_seg_id       | integer                  | 
 track_seg_point_id | integer                  | 
 ele                | double precision         | 
 time               | timestamp with time zone | 
 magvar             | double precision         | 
 geoidheight        | double precision         | 
 name               | character varying        | 
 cmt                | character varying        | 
 desc               | character varying        | 
 src                | character varying        | 
 link1_href         | character varying        | 
 link1_text         | character varying        | 
 link1_type         | character varying        | 
 link2_href         | character varying        | 
 link2_text         | character varying        | 
 link2_type         | character varying        | 
 sym                | character varying        | 
 type               | character varying        | 
 fix                | character varying        | 
 sat                | integer                  | 
 hdop               | double precision         | 
 vdop               | double precision         | 
 pdop               | double precision         | 
 ageofdgpsdata      | double precision         | 
 dgpsid             | integer                  | 
 speed              | double precision         | 
Indexes:
    "track_points_pkey" PRIMARY KEY, btree (ogc_fid)
    "track_points_wkb_geometry_geom_idx" gist (wkb_geometry)

There are three fields that are completely blank: src, desc, and name. I decided to use src to record the name of the file each batch of points came from.

First I modified my previous program to parse the command-line options using Getopt::Long. I don't use all of its power in this example, but in the past I've been well served by starting with it in case the script grows and mutates.

With Getopt::Long, I understand there are ways to pass a list of things in the arguments. You can repeat the same option, for example --file mydata.gpx --file moredata.gpx, or you can pass a comma-separated list and follow the recipe in the perldoc for the module. However, I wanted to use a glob, like --file data/*.gpx, so I instead decided to just stick all the files after a double dash on the command line. So really, in the following code, I'm only using Getopt::Long to parse out a --help option! However, it's there if I need to expand functionality in the future.

use strict;
use warnings;
use Carp;

use Geo::GDAL;
use Data::Dumper;

use Getopt::Long;
use Pod::Usage;

my $man = 0;
my $help = 0;

my @files;

my $result = GetOptions(
    'help|?' => \$help,    # GetOptions needs a reference to store the flag
    ) or pod2usage(2);

pod2usage(-exitval => 0, -verbose => 2) if $help;

@files = @ARGV;
...

With that, I have all of my input files in an array, and I can loop over them and store each filename in the src field in the db by calling $new_feature->SetField('src', $_), as follows:

foreach (@files){

    my $ds = Geo::OGR::Open($_);

    my $layer         = $ds->Layer($layer_name);
    my $feature_count = $layer->GetFeatureCount();
    carp "$layer_name, $feature_count";
    if ( $feature_count < 10 ) {
        next;
    }

    carp "saving $_ to pg";

    # now append each feature
    my $x = 0;
    $pg_layer->StartTransaction();
    while ( my $feature = $layer->GetNextFeature() ) {

        my $new_feature = Geo::OGR::Feature->new($defn);
        $new_feature->SetFrom($feature);

        # write the filename as the src field, for making lines later
        $new_feature->SetField('src',$_);

        my $pgf = $pg_layer->CreateFeature($new_feature);

        $x += 1;
        if ( $x % 128 == 0 ) {
            carp $x;
            # uncomment the following to crash your program
            # $pg_layer->CommitTransaction();
            # StartTransaction() seems to auto commit prior transaction?
            $pg_layer->StartTransaction(); 
            $x = 0;
        }

    }
    if ($x) {
        carp "all done, $x remaining";
        $pg_layer->CommitTransaction(); # this one doesn't crash for some reason
        carp "last transaction committed";
    }
}

That does its magic, and the database now has distinct groups of points. Now if you want to make “lines” out of those points, you can do this in PostGIS:

SELECT ST_MakeLine(wkb_geometry ORDER BY track_seg_point_id ASC) AS linegeom, src
INTO TABLE testogr.lines
FROM testogr.track_points
GROUP BY src;

Et voilà!

QGIS rendering the new lines table, on top of OSM lines data

Of course, that isn’t at all helpful, as I want to see speeds, not just the lines. Next step is to try to figure out how to add a measure to each point, and then collect those (X,Y,M) type points into a line with a measure dimension. I guess that will be my next post.

Using GDAL/OGR perl bindings to load GPX files into PostgreSQL/PostGIS

Today I wrote a short perl program to import GPX files into PostgreSQL using the OGR library’s native perl bindings. This was a super pain to figure out because the naive way doesn’t work, and it appears all the documentation pushed out to mailing lists and on various wikis talks about Python.

OGR has an excellent tool called ogr2ogr that allows you to append data. However, I didn't want to use that because I wanted to fiddle with the data first, then pipe it to SQL. Specifically, I wanted to delete long pauses at stop lights, etc., and I wanted to use some logic to make sure I didn't blindly reload old GPX files.

My initial solution was to simply copy the GPX layer in, and then hunt around for a way to flip on an “append” option. My initial program looked like:

use strict;
use warnings;
use Carp;

use Geo::GDAL;
use Data::Dumper;

# Establish a connection to a PostGIS database
my $pg = Geo::OGR::GetDriverByName('PostgreSQL');
if ( !$pg ) {
    croak 'PostgreSQL driver not available';
}

my $conn = $pg->Open( "PG:dbname='osm' user='james' schemas=testogr", 1 );

if ( !$conn ) {
    croak 'choked making connection';
}

my $ds = Geo::OGR::Open('../test/2014-07-10_07-29-12.gpx');

my $pg_layer;
my $defn;

## I'm only interested in the track_points layer
my $layer_name = 'track_points';
my $layer      = $ds->Layer($layer_name);

# use copy
$pg_layer = $conn->CopyLayer( $layer, $layer_name, { 'overwrite' => 1 } );
if ( !$pg_layer ) {
    carp 'failed to copy';
}

1;

That works, but curiously the FID doesn't automatically increment when using CopyLayer. No matter, I don't actually use that, because I like creating my own table definitions.

And even if that did work properly, it would only work once. Every other time, that “overwrite” option on the CopyLayer command is going to wipe the table.

Poring over the docs, I didn't see any option for "append" as used in the ogr2ogr utility. So I combed through the ogr2ogr source code, and discovered that the "-append" option actually causes the code to create each feature and add it to the existing layer inside a loop, iterating over each of the fields in the layer:

    if (papszFieldMap && bAppend)
    {
        int bIdentity = FALSE;

        if (EQUAL(papszFieldMap[0], "identity"))
            bIdentity = TRUE;
        else if (CSLCount(papszFieldMap) != nSrcFieldCount)
        {
            fprintf( stderr, "Field map should contain the value 'identity' or "
                    "the same number of integer values as the source field count.n");
            VSIFree(panMap);
            return NULL;
        }

        for( iField=0; iField < nSrcFieldCount; iField++)
        {
            panMap[iField] = bIdentity? iField : atoi(papszFieldMap[iField]);
            if (panMap[iField] >= poDstFDefn->GetFieldCount())
            {
                fprintf( stderr, "Invalid destination field index %d.n", panMap[iField]);
                VSIFree(panMap);
                return NULL;
            }
        }
    }

So I tried something like that, but for some reason I kept failing to add the new feature to the existing PostgreSQL layer. My broken code looked like:

if ( !$append ) {
    $pg_layer = $conn->CopyLayer( $layer, $layer_name );
    if ( !$pg_layer ) {
        carp 'failed to copy';
    }
}
else {
    if ( !$pg_layer ) {

        # try to get the layer from db
        $pg_layer = $conn->GetLayerByName($layer_name);
        $defn     = $pg_layer->GetLayerDefn();
    }

    # now append each feature
    while ( my $feature = $layer->GetNextFeature() ) {

        my $newFeature = Geo::OGR::Feature->new($defn);

        # Add field values from input Layer
        for my $fi ( 0 .. $defn->GetFieldCount() - 1 ) {
            $newFeature->SetField( $defn->GetFieldDefn($fi)->GetNameRef(),
                $feature->GetField($fi) );

            # Set geometry
            $newFeature->SetGeometry( $feature->GetGeometryRef() );
        }

        # THIS BREAKS 
        my $pgf = $pg_layer->InsertFeature($newFeature);

    }
}

And many variations on that theme, including just trying to directly copy in the feature with $pg_layer->InsertFeature($feature).

The unhelpful error read:

RuntimeError Illegal field type value at /usr/local/lib64/perl5/Geo/OGR.pm line 1473.

I hacked out a little instrumentation around Geo/OGR.pm line 1473, but then I found that the offending "field type value" changed every time, which made me think I was doing something wrong.

Finally, after giving up twice, I stumbled on an old mailing list posting here. Again, it was in Python, but I read Python well enough to translate into perl without problems. With a little bit of hacking around a buggy call to CommitTransaction(), it worked! My final code looks like:

use strict;
use warnings;
use Carp;

use Geo::GDAL;
use Data::Dumper;

# Establish a connection to a PostGIS database
my $pg = Geo::OGR::GetDriverByName('PostgreSQL');
if ( !$pg ) {
    croak 'PostgreSQL driver not available';
}

my $conn = $pg->Open( "PG:dbname='osm' user='james' schemas=testogr", 1 );

if ( !$conn ) {
    croak 'choked making connection';
}

my $ds = Geo::OGR::Open('../test/2014-07-14_17-56-45.gpx');

my $pg_layer;
my $defn;
my $layer_name = 'track_points';

my $layer         = $ds->Layer($layer_name);
my $feature_count = $layer->GetFeatureCount();
carp "$layer_name, $feature_count";
if ( $feature_count < 10 ) {
    croak;
}
carp "saving to pg";
if ( !$pg_layer ) {

    # try to get the layer from db
    $pg_layer = $conn->GetLayerByName( $layer_name, 1 );
    $defn = $pg_layer->GetLayerDefn();
    carp $pg_layer->GetFeatureCount();
}

# now append each feature
my $x = 0;
$pg_layer->StartTransaction();
while ( my $feature = $layer->GetNextFeature() ) {

    my $new_feature = Geo::OGR::Feature->new($defn);
    $new_feature->SetFrom($feature);
    my $pgf = $pg_layer->CreateFeature($new_feature);

    $x += 1;
    if ( $x % 128 == 0 ) {
        carp $x;
        # uncommenting the following causes a crash.  Bug?
        # $pg_layer->CommitTransaction();
        $pg_layer->StartTransaction();
        $x = 0;
    }

}
if ($x) {
    carp "all done, $x remaining";
    # curiously, this call to CommitTransaction works okay
    $pg_layer->CommitTransaction();
    carp "last transaction committed";
}
1;

From simple examples to complicated real world cases

I have a really irritating use-case for a CouchDB view. I have several hundred million documents representing hourly data for 4km grid cells in California, and I need to group them by areas. For example, grid cell i=100, j=223 is in Mendocino County, and in the “NORTH COAST” air basin. Of course I have the geometry of the grid cells and the geometry of the counties, air basins, and so on, in PostgreSQL/PostGIS, and I usually just shoot off a query to get the relationship and I’m done. This is CouchDB, however, and views cannot rely on external information lest they become idemimpotent (I made that up). Everything that the view needs must be in the view from the start.

Fair enough, I set up the SQL queries and generated my 9,800+ row JavaScript hash lookup table that maps grid cell to various areas of interest. Now I want to mix that into the view without pulling my hair out.

There is a really simple example in the CouchDB wiki. I’ve reproduced it below:

 {
   _id:"_design/test",
   language: "javascript",
   whatever : {
     stringzone : "exports.string = 'plankton';",
     commonjs : {
       whynot : "exports.test = require('../stringzone')",
       upper : "exports.testing = require('./whynot').test.string.toUpperCase()"
     }
   },
   shows: {
     simple: "function() {return 'ok'};",
     requirey : "function() { var lib = require('whatever/commonjs/upper'); return lib.testing; };"
   },
   views: {
     lib: { 
       foo: "exports.bar = 42;" 
     },
     test: { 
       map: "function(doc) { emit(doc._id, require('views/lib/foo').bar); }"
     }
   }
  }

So where the above example says foo: "exports.bar = 42;", I want to add in my massive hashtable. Obviously cutting and pasting so many lines is not the way to go. Instead, I’m using a couchapp tool.

The concept of a couchapp used to get more press than it currently seems to, but the basic idea is to use code to load up your design doc with attachments and views. In my case, I couldn't care less about the attachments and the notion of a webapp stored and served by CouchDB. I just want to programmatically construct the view document and push it to CouchDB. I chose to use node.couchapp.js. I could also have "rolled my own", and in fact I probably will this afternoon. I am playing around with grunt, so I used grunt_couchapp (after patching it a bit to use cookie-based authentication).

The basic structure of my directory is the following:


config.json
package.json
Gruntfile.js
app.js
lib
├── cellmembership.json
└── dump_membership.js
node_modules
├── ...
└── ...

The config.json file contains my database details, including my username and password. package.json lists the npm dependencies, mostly whatever was pulled in by the grunt_couchapp tool, and the node_modules directory holds all the node modules. I do not have an _attachments directory, so I make sure my design doc has no attachments!

Before getting to app.js, in which the design document is defined, I will first talk about what goes into it. The lookup table is stored as a JSON object in lib/cellmembership.json. The contents look like this:

{ "100_223":{"airbasin":"NORTH COAST","bas":"NC","county":"MENDOCINO","fips":"23","airdistrict":"MENDOCINO COUNTY AQMD","dis":"MEN"},
 "100_224":{"airbasin":"NORTH COAST","bas":"NC","county":"MENDOCINO","fips":"23","airdistrict":"MENDOCINO COUNTY AQMD","dis":"MEN"},
   ... 9,890 more lines like this ...
 "304_48":{"airbasin":"SALTON SEA","bas":"SS","county":"IMPERIAL","fips":"13","airdistrict":"IMPERIAL COUNTY APCD","dis":"IMP"},
 "98_247":{"airbasin":"NORTH COAST","bas":"NC","county":"HUMBOLDT","fips":"12","airdistrict":"NORTH COAST UNIFIED AQMD","dis":"NCU"}
}

The view code that uses this file is saved to lib/dump_membership.js, and looks like:

module.exports = function(doc){
    var lookup = require('views/lib/cellmembership').lookup
    emit(lookup[doc.cell_id].county, doc.value)
}

These two pieces are put together in app.js, which looks like this:

var couchapp = require('couchapp')
var cellmembership = require('./lib/cellmembership.json')
var mapfun = require('./lib/dump_membership')

var ddoc = {
    _id: '_design/calvad',
    rewrites: [{
      from: '',
      to: 'index.html',
      method: 'GET',
      query: {}
    },{
      from: '/*',
      to: '/*'
    }],
    views: {
        "lib":{
            "cellmembership":"exports.lookup="+JSON.stringify(cellmembership)
        },
        "test":{
            "map":mapfun
        }
    },
    lists: {},
    shows: {}
};


module.exports = ddoc;

So instead of "exports.bar=42;", I put in "exports.lookup="+JSON.stringify(...). The key insight that the simple example didn't really convey is that you want your entire "library" module to be a string. So in this case that means saving my JSON lookup document as a string using JSON.stringify. I probably could have just loaded it directly using fs.readFile(), but I like this way, because it soothes my worries about malformed JSON. If the JSON is screwed up, app.js won't run, and the failure happens right away, not in the midst of cranking through hundreds of millions of documents.

The other bit that I didn't get from the example was how to include an external function in the design document. What I did was pretty simple, and it worked: I just did "map":mapfun. This is exactly the opposite of what needed to be done with the views.lib.cellmembership construct. There the exports.lookup= statement needs to be a string inside the JavaScript, whereas the assignment of the map function needs to be actual JavaScript code, not the string representation of that code.

This is exactly the kind of inconsistency that drives me nuts and that nobody ever thinks to document, because only crazies like me run into those edge cases.

Take that, cryptic error message

Sometimes when you have a program that works fine for weeks and weeks, it still has bugs that crop up for no apparent reason. Yesterday I ran into that sort of irritating situation, but I learned some stuff, so I'm writing this up so that there is one more possible solution paired with a cryptic error message for the search engines to suck up.

The situation

I am running a geospatial modeling job to estimate variables in time and space. There are a lot of little grids to process, and each needs a model run for each hour. Continue reading

How I fire off multiple R jobs from node.js

Node.js has become my hammer of choice for most systems programming type jobs. In an earlier post I talked about how to use CouchDB to store the progress and state of jobs that need doing. Here I will demonstrate how I trigger those jobs and update CouchDB using a fairly simple node.js program.

Two key features of node that make this program possible are spawn and the ability to read and manipulate environment variables.

var spawn = require('child_process').spawn
var env = process.env

Node.js is fully capable of using child processes. One can choose from exec, execFile, spawn, and fork. For my usage, the spawn function does exactly what I want—it creates a child process that reports back when it exits.

The other useful tool is the ability to access the current running environment using the process.env variable. This allows my program to take note of any environment variables that are already set, and to fill in any missing variables that my child process might need.
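
To make that concrete, here is a minimal sketch of how those two pieces fit together. The script name, its argument, and the R_LIBS_USER default are placeholders for illustration; the spawn call and the process.env handling are the parts that matter.

var spawn = require('child_process').spawn;

// copy the current environment, then fill in anything the child might be missing
var env = {};
Object.keys(process.env).forEach(function (key) {
    env[key] = process.env[key];
});
env.R_LIBS_USER = env.R_LIBS_USER || '/home/james/R/library'; // hypothetical default

// fire off one R batch job; the script and its argument are made up for this example
var job = spawn('Rscript', ['model_one_grid.R', '100_223'], { env: env });

job.stdout.pipe(process.stdout);
job.stderr.pipe(process.stderr);

job.on('exit', function (code) {
    // the child has reported back; this is where the job's state would get updated
    console.log('child finished with exit code', code);
});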

Concurrency via queue

Using spawn one can fire off as many jobs as desired. Suppose you have a machine with four cores; then calling spawn four times will use your processing power efficiently. Unfortunately it isn't usually that simple. Instead, what typically happens is that you have a lot of separable data crunching tasks that need to be run, and you want to have four data processing jobs running at all times until the work is all done. To accomplish this, the spawn function needs to be called four times (to fill up the processors), and then a new job needs to be spawned whenever one of the existing jobs finishes.
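
Here is a rough sketch of that queue logic, assuming a runJob(task, done) helper that wraps the spawn call above and invokes done() when the child exits. (The async module's queue() does essentially the same thing, if you would rather not roll your own.)

var os = require('os');

function runQueue(tasks, concurrency, runJob) {
    var pending = tasks.slice(); // copy, so we can shift() work off the front
    var running = 0;

    function next() {
        // keep filling slots until we hit the concurrency limit or run out of work
        while (running < concurrency && pending.length > 0) {
            var task = pending.shift();
            running += 1;
            runJob(task, function () { // the done callback fires when the child exits
                running -= 1;
                next();                // a slot just opened up, so start another job
            });
        }
    }
    next();
}

// for example, keep one job per core busy over a list of county tasks:
// runQueue(counties, os.cpus().length, runJob);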

Continue reading

Using CouchDB to store state: My hack to manage multi-machine data processing

This article describes how I use CouchDB to manage multiple computing jobs. I make no claims that this is the best way to do things. Rather I want to show how using CouchDB in a small way gradually led to a solution that I could not have come up with using a traditional relational database.

The fundamental problem is that I don’t know what I am doing when it comes to managing a cluster of available computers. As a researcher I often run into big problems that require lots of data crunching. I have access to about 6 computers at any given time, two older, low-powered servers, two better servers, and two workstations, one at work and one at home. If one computer can’t handle a task, it usually means I have to spread the pain around on as many idle CPUs as I can. Of course I’ve heard of cloud computing solutions from Amazon, Joyent, and others, but quite frankly I’ve never had the time and the budget to try out these services for myself.

At the same time, although I can install and manage Gentoo on my machines, I’m not really a sysadmin, and I really can’t wire up a proper distributed heterogeneous computing environment using cool technologies like Ømq. What I’ve always done is human-in-the-loop parallel processing. My problems have some natural parallelism—for example, the data might be split across the 58 counties of California. This means that I can manually run one job per county on each available CPU core.

This human-in-the-loop distributed computing model has its limits, however. Sometimes it is difficult to get every available machine to have the same computational environment. Other times it just gets to be a pain to manually check on all the jobs and keep track of which are done and which still need doing. And when a job crashes halfway through, my manual method sucks pretty hard, as it usually means restarting that job from the beginning.

Continue reading

Using superagent to authenticate a user-agent in node.js, plus a bonus bug!

Summary

This post describes how I use the superagent library to test access to restricted resources on my web server. This is something that I found to take a bit more effort than I expected, so I thought I’d write this up for the greater good.

Context

I am running a website in which some resources are open to the internet, while others require authentication against our CAS server.

I have been logging into the CAS server using request. But in the interest of trying out different libraries and all that, I decided to rewrite my method using superagent.

I need to log into the CAS server from node.js because I am writing tests that verify that the protected resources are hidden to non-authenticated users, and available to authenticated ones. Continue reading
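
The heart of it, stripped of the CAS-specific form handling, is superagent's agent(), which keeps cookies between requests, so one agent can play the authenticated user while plain requests play the anonymous one. A bare-bones sketch, with a hypothetical casLogin() helper standing in for the actual login dance:

var request = require('superagent');

// one agent per "user"; it stores and resends session cookies automatically
var agent = request.agent();

// hypothetical helper: performs the CAS form submission with this agent,
// then calls back once the session cookies are in place
casLogin(agent, 'username', 'password', function (err) {
    if (err) { throw err; }

    // authenticated: should be allowed through to the protected resource
    agent.get('https://example.org/protected/thing')
        .end(function (err, res) {
            console.log('authenticated status:', res && res.status);
        });
});

// non-authenticated: a plain request with no cookies should get turned away
request.get('https://example.org/protected/thing')
    .redirects(0)
    .end(function (err, res) {
        console.log('anonymous status:', res && res.status); // expect a 302 to CAS, or a 401/403
    });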

Development server logs during development

In a prior post trumpeting my modest success with getting geojson tiles to work, I typed in my server address, but didn’t make it a link. That way robots wouldn’t automatically follow the link and my development server wouldn’t get indexed by Google indirectly.

What is interesting to me is that I still get the occasional hit from that posting. And this is with the server bouncing up and down almost continuously as I add functionality. Just now I was refactoring the tile caching service I wrote, and in between server restarts, someone hit my demo app.

And the GeoJSON tiler is coming along. In making the caching part more robust, I added a recursive directory creation hack which I explain below.
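
For the impatient, such a hack usually boils down to creating each missing parent directory in turn, something like this generic mkdir -p style sketch (not necessarily the exact code from the post):

var fs = require('fs');
var path = require('path');

// make a directory and any missing parents, mkdir -p style
function mkdirp(dir) {
    try {
        fs.mkdirSync(dir);
    } catch (err) {
        if (err.code === 'ENOENT') {
            mkdirp(path.dirname(dir)); // a parent is missing; create it first
            fs.mkdirSync(dir);         // then try this directory again
        } else if (err.code !== 'EEXIST') {
            throw err;                 // "already exists" is fine; rethrow anything else
        }
    }
}

// e.g. mkdirp('cache/tiles/12/655');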

Continue reading

R. Struggle with it and it becomes clear.

Been using R almost exclusively for the past few weeks. I've always liked R, but I find the syntax and style maddeningly slow to ingest. Perhaps everybody is like this, but I've found that some programming language idioms I take to pretty readily (JavaScript and Perl), some I hate (Java before generics and Spring IoC was odious; after, it is at least tolerable), and others I just have to fight through a few weeks of doing things utterly wrong.

R falls into that last camp, but since I used to be pretty good at it back when I was working on my dissertation, I've always considered it my go-to stats language. So now that I have a major deliverable due, and it really needs more advanced statistics than the usual "mean/max/min/sd" one can throw at data, I've taken the plunge back into R syntax once again.

I'm building up scripts to process massive amounts of data (massive to me, perhaps not to Google and Yahoo, but a terabyte is still a terabyte), so each step of these scripts has to be fast. Periodically I come across some step that is just too slow, or something that used to be fast but bogs down as I add more cruft and throw more data at it.

Here is an example of how R continues to confound me even after 3 weeks of R R R (I’m a pirate, watch me R). Continue reading