Learning about blockchain; starting with some ICO events

I’ve decided to dive deeper into blockchain and Ethereum. Because I’m a transportation guy, I did a search for ICOs in the transportation space and came up with TrafficX, ZeroTraffic, and DOVU. I’ve ranked them from sucktastic to slightly silly.

TrafficX

TrafficX looks like a complete scam. I can’t see anything solid anywhere, and their development plan seems to say “Pay us a lot of Ether and we’ll hire some developers. Get in early!” Their initial offering has closed and they raised some money. If it isn’t a scam, we should see some results soon. If it is a scam, I guess the value of their token will eventually drop to zero.

ZeroTraffic

ZeroTraffic at least has people behind it who have been in the transportation space for years, but I think their idea is weak. From my read-it-once-through, not super-careful reading of their whitepaper, they are basically using their ICO to allow people to buy “deputy mayor of traffic” positions for their local areas. They will use the funds from their ICO to pay for further development of a website and apps for the gamification of traffic choices, and will use the (hopefully relevant and insightful) input of the local traffic mayors to get people to travel in off-peak times. In my opinion, this is a dumb idea, will not at all result in “Zero Traffic”, and is a bad investment on the part of the ICO investors. On the other hand, the team is honest about their idea, and will make a good faith effort to move the idea from concept to reality. This ICO certainly isn’t a scam.

DOVU

Finally, DOVU has big rhetoric but more prosaic (and achievable) goals. Their slogan is “Blockchain powered mobility”, but in their whitepaper it turns out they are just trying to monetize the sharing of traffic information. I get the feeling they were casting about for a good idea in the mobility space, and came up with this after several brainstorming sessions. But I also get the feeling that they have mostly computer scientists on their team, rather than transportation economists. They are correct that their technology will likely make it possible to monetize sharing data about one’s trips, but I think they are incorrect in their presumption that this information will have any intrinsic value.

This blind spot arises because we as a society are still transitioning from a world of scarce traffic information to one of plenty. Just ten years ago it would have been unthinkable for a transportation engineer to expect accurate, up-to-the-minute estimates of travel times on city streets. Major highways have been well instrumented for decades using loop detectors, but even that data can be spotty and is notoriously bad at estimating speeds and travel times. In the early days of intelligent transportation systems (ITS), many business ideas were floated based on the idea that specially instrumented probe vehicles with wireless data transponders could be used to collect traffic data. That travel data could then be sold to travelers who wished to know the latest and greatest traffic conditions and the best routes to their destinations. Today it is possible to view current traffic conditions for both highways and major surface streets, and services like Google Maps will (for free) provide turn-by-turn navigation that includes advice about traffic incidents, their expected severity, and whether a better alternative route exists.

(As an aside, I never believed any of that was necessary, and I wrote a paper back in 2002 arguing that regular people could just post traffic conditions on roads much like bloggers post reviews of restaurants. It was rejected even for presentation at TRB that year with a scathing review. I was pissed because the review was so obviously biased towards the status quo (big top-down ITS projects), and so after that I just gave up on writing papers and publishing in transportation. Ironically, that paper also kinda described how Waze works, which would make me feel vindicated if I had gone on to found Waze and now had millions of dollars in the bank, but I didn’t and I don’t, so I’m just bitter instead!)

Anyway, back to the critique of DOVU. Now that every vehicle with a cellphone is a traffic probe data collector, DOVU sees an opportunity to “get paid to drive” by selling the information you collect. Unfortunately, I think they are unintentionally conflating the past and the present. In the past, traveler information was scarce and therefore valuable. Today, when every cellphone is broadcasting location data to the wireless provider, to the phone’s OS itself, and to any app running with permission to track location, traveler information is abundant, not scarce, and therefore is worth very little.

The downsides for DOVU are two-fold. First, even if DOVU operated in a vacuum, if everybody were to participate in their system, the basic rules of competitive markets dictate that marginal cost pricing will hold, and the marginal cost of one additional traveler’s information is next to zero. Second, DOVU does not and will not operate in a vacuum. It is trying to break into a space in which many players are already collecting a wealth of traveler information, with no way to force the current market players to use its system. There is nothing in the DOVU setup that will in any way prevent the many actors who are already gathering traveler information from continuing to do so. All the information that currently circulates without DOVU will continue to flow outside the DOVU walled garden, keeping the same arrangement in which providing that information is part of the cost of using a phone, getting turn-by-turn directions, and so on. DOVU participants will be competing with each other to sell their traffic information, as well as with everybody outside the DOVU ecosystem (who are basically giving their information away for free). Thus there will be next to zero benefit to any purchaser from any DOVU user’s travel data, and so the market clearing price of all this collected and monetized information will be zero.

To their credit, DOVU seems to be thinking about information beyond just travel times and speeds. In presentations and in the white paper, they imply accessing the vehicles’ information systems to gather data such as windshield wiper use, emergency braking activation, and so on, and suggest that those secondary measurements can reveal hidden truths such as weather conditions and traffic hazards. In practice, I don’t think so. We have weather stations for the former and Waze users for the latter. There might be some facet of travel that could be measured and might be interesting, but I doubt very much that anything would be valuable enough to satisfy DOVU’s implied promise that drivers could get paid to drive.

All those criticisms aside, I actually like their plan a little better than the ZeroTraffic idea. Whereas ZeroTraffic claims it will solve traffic, DOVU sticks to the more prosaic claim that you can get paid to reveal your data. It could be that in the future cellphones will no longer report faithfully back to their corporate sponsors, perhaps because anonymous, blockchain-based payment systems have made it possible to conduct a phone conversation or data transaction without tracking the mobile device. Or perhaps some other bit of information will become as valuable as DOVU believes transportation data bits are now. I’m not a fan of their tagline “blockchain powered mobility”, as they do not directly provide mobility at all, but in the future DOVU could pivot based on an opportunity that presents itself and end up with a winning proposition. In contrast, ZeroTraffic is stuck on its path once it has sold its “deputy mayor of traffic” positions.

Me me me

My interests in applying blockchain techniques to transportation are based more in transportation economics than in engineering or computer science, and I haven’t yet seen anybody trying to implement my thoughts, so there’s still hope for my startup idea. Unfortunately, my idea will likely end up being more an over-planned scheme like ZeroTraffic and less a generic enabling technology like DOVU. Without giving anything away, I’m interested in dealing with that marginal cost pricing problem. I’m not going to blab about my ideas until I get more clue about blockchain and Ethereum and all the other related tech. Maybe in a few months I’ll float my own ICO with my own team, and face my own round of critical blog posts pointing out how clueless my ideas are.

PS, If anybody from DOVU reads this and is interested in collaborating, shoot me a message and I’ll share more of my ideas. Or just recruit Professor John Polak from Imperial College London for your advisory board.


stupid patents

Okay, Google just patented automated delivery vehicles. Dumb. Car with a lock on it. Not hard, super obvious. US009256852

And to paraphrase Mr. Bumble, “If the law supposes [that this kind of invention is patentable before we even have widespread use of driverless cars], then the law [(and Google)] is a ass—a idiot.”

At stage 3 with self-driving cars

I recently wrote that self-driving cars were inevitable and would change nearly everything about our understanding of traffic flow and how the demand for travel (a person wanting to be where he or she is not) will map onto actual trips. We’re still planning with the old models, which were already sucky and broken, but now they are even more sucktastic and brokeriffic.

Today in the LA Times business section[1], an article reports that a “watchdog” group[2] is petitioning the DMV to slow down the process of adopting self-driving cars. It struck me that this act is very similar to bargaining, which means we’re at the third stage of grief.

The first stage is denial. “It can never happen.” “Computers will never be able to drive a car in a city street.” Over. Done. Proven wrong.

The second stage is anger. I haven’t seen that personally, but I have seen hyperbole in attacks like “what are you going to do when a robot chooses to kill innocent children on a bus?” A cross between stage one and stage two is probably this article from The Register.

The third stage is bargaining. The linked page above has the example of “just let me see my son graduate”. In this case, we’ve got “slow down to 18 months so we can review the data and make sure it is safe”. While I’m not suggesting we rush to adopt unsafe robot cars, it is interesting to see how quickly the arguments against self-driving cars have moved to stage 3.

I’m keeping an eye out for depression (old gear-heads blaring Springsteen’s Thunder Road while tinkering with their gas guzzling V-8s?) and then acceptance (we’ve got a robot car for quick trips around town, but we also have a driver car for going camping in the mountains).


  1. The link is the best I could find right now, but is exactly the same as the print article 
  2. The group non-ironically calls itself Consumer Watchdog! 

Why is there glitter on the floor?

Glitter

The light bouncing off the chair leg makes the ugly scratches in the floor sparkle like glitter.

I’ve spent many hours thinking about driverless cars, and have even drafted a few blog posts. With the announcement the other day from Google, and the subsequent flurry of news coverage, it is time for me to join the party and get my thoughts out there.

A prediction

First, my prediction: Self-driving cars will become standard.


Using CouchDB to store state: My hack to manage multi-machine data processing

This article describes how I use CouchDB to manage multiple computing jobs. I make no claims that this is the best way to do things. Rather, I want to show how using CouchDB in a small way gradually led to a solution that I could not have come up with using a traditional relational database.

The fundamental problem is that I don’t know what I am doing when it comes to managing a cluster of available computers. As a researcher I often run into big problems that require lots of data crunching. I have access to about six computers at any given time: two older, low-powered servers, two better servers, and two workstations (one at work and one at home). If one computer can’t handle a task, it usually means I have to spread the pain around on as many idle CPUs as I can. Of course I’ve heard of cloud computing solutions from Amazon, Joyent, and others, but quite frankly I’ve never had the time and the budget to try out these services for myself.

At the same time, although I can install and manage Gentoo on my machines, I’m not really a sysadmin, and I really can’t wire up a proper distributed heterogeneous computing environment using cool technologies like ØMQ. What I’ve always done is human-in-the-loop parallel processing. My problems have some natural parallelism—for example, the data might be split across the 58 counties of California. This means that I can manually run one job per county on each available CPU core.

This human-in-the-loop distributed computer model has its limits however. Sometimes it is difficult to get every available machine to have the same computational environment. Other times it just gets to be a pain to have to manually check on all the jobs and keep track of which are done and which still need doing. And when a job crashes halfway through, then my manual method sucks pretty hard, as it usually means restarting that job from the beginning.
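To make the idea concrete, here is a minimal sketch of one way CouchDB can coordinate workers like this: each job (say, one per county) is a document, and CouchDB’s habit of rejecting a save that carries a stale `_rev` is exactly what keeps two machines from claiming the same job. The `FakeCouch` class below is a pure-Python stand-in for that conflict behavior so the sketch runs without a server; the class and field names are illustrative, not my actual schema, and against a real CouchDB you would do the same thing over its HTTP API.

```python
class ConflictError(Exception):
    """Stands in for CouchDB's 409 response to a save with a stale _rev."""


class FakeCouch:
    """In-memory stand-in for a CouchDB database with optimistic concurrency."""

    def __init__(self):
        self._docs = {}

    def get(self, doc_id):
        doc = self._docs.get(doc_id)
        return dict(doc) if doc else None

    def save(self, doc):
        # Reject the save if the caller's _rev doesn't match the stored one.
        existing = self._docs.get(doc["_id"])
        if existing and existing["_rev"] != doc.get("_rev"):
            raise ConflictError(doc["_id"])
        doc = dict(doc)
        doc["_rev"] = (existing["_rev"] + 1) if existing else 1
        self._docs[doc["_id"]] = doc
        return doc["_rev"]

    def all_docs(self):
        return [dict(d) for d in self._docs.values()]


def claim_next_job(db, worker):
    """Mark one pending job as running and return its id.

    A ConflictError means another worker saved the document first,
    so we just move on and try to claim a different job.
    """
    for doc in db.all_docs():
        if doc["status"] != "pending":
            continue
        doc["status"] = "running"
        doc["worker"] = worker
        try:
            db.save(doc)
            return doc["_id"]
        except ConflictError:
            continue  # lost the race; try another job
    return None  # nothing left to claim
```

Seeding the database with one document per county and calling `claim_next_job` from each machine gives every worker a distinct county, and a crashed job can later be reset to “pending” by whoever notices its stale `updated` timestamp.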


Public Planning Models

Craig and I just posted our entry into the Knight News Challenge lottery. It is called Public Planning Models, in a classic case of a working title ending up being the final title.

The basic idea is that planning models are opaque and mysterious, and really buggy and error-prone. The problem isn’t the fault of the modelers or the model systems, but rather the lack of input data. Consider that a planning model first tries to model today’s world, and then tries to model the future using that same model with extrapolated conditions. There are two sources of error: the model of the present, and the extrapolation of that model into the future.

In a perfect, totalitarian state, the government would know everywhere you go, and all that information could be loaded into the model of the present. Calibration would be simple, because every vehicle is already in the model, so of course it captures reality. But even in a totalitarian, all-knowing state, predicting the future isn’t possible. Trends reverse themselves, people pick up different habits, and technology happens, changing the way we do things.

We have been watching and participating in the evolution of planning models, in particular pushing for the adoption of activity-based models over trip-based models. The big problem here is the burden of data collection, as well as the increased complexity of the model framework. Activity-based models are being adopted only incrementally because they are too complicated and cost too much money to deploy.

Public Planning Models takes a different approach. Rather than trying to come up with better data collection processes and better modeling techniques, we thought it would be better to try to expose the full ugliness of current planning models to the public. This serves three purposes. First, people can see just how weak many of the fundamental assumptions in these models are. Second, everybody can take a look at the model system and suggest corrections and improvements, in the spirit of crowd-sourcing the model calibration step. And third, exposing the models and the applications of those models will give people an incentive to become more involved. That involvement can run the gamut from simply providing a few days’ worth of travel and activity data to the model’s input data set, to taking the model system itself and playing around with alternate planning scenarios.

Anyway, take a look at our proposal, add comments, and if you know one of the judges, put in a good word for our efforts. There are tons of submissions, and all of the ones I’ve read so far look pretty good.

Mode choice versus life cycle change

During TRB I attended a presentation on the effect of life cycle changes on travel pattern characteristics. The presenter defined the usual life cycle changes (getting married, changing home location, having a child, etc.) and set up a structural equation model to relate these changes to the size of a person’s social network, the length (distance) and number of trips per day, the length (duration) and number of activities per day, and so on.

The work was interesting and got me thinking whether one could treat “being green” as a life cycle choice rather than as a mode choice. In the usual mode choice context, …

Reduced parking requirements article

There is an article in today’s LA Times that talks about a move to reduce the parking requirements for various kinds of retail. This is very interesting and could begin to push people to reduce driving. In parallel, there are a few laws on the books in California that require denser development in order to reduce greenhouse gas emissions. Now denser development by itself will not reduce greenhouse gas emissions, and may in fact make things worse if everybody keeps driving exactly as they do now (imagine: more destinations crammed into a smaller space means more cars on the same streets means more traffic means more emissions). But if denser development is paired with reduced parking requirements, there is even more incentive to leave the car at home for a trip or two (as there will be nowhere to park it when you get there).

Development server logs during development

In a prior post trumpeting my modest success with getting GeoJSON tiles to work, I typed in my server address but didn’t make it a link. That way robots wouldn’t automatically follow the link, and my development server wouldn’t get indexed by Google indirectly.

What is interesting to me is that I still get the occasional hit from that posting. And this is with the server bouncing up and down almost continuously as I add functionality. Just now I was refactoring the tile caching service I wrote, and in between server restarts, someone hit my demo app.

And the GeoJSON tiler is coming along. In making the caching part more robust, I added a recursive directory creation hack which I explain below.
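For illustration, here is a minimal sketch of the general pattern (in Python rather than the tiler’s own code, and with made-up names): derive the nested cache path from the tile coordinates, create the directories recursively before writing, and tolerate the case where another request already created them.

```python
import os


def cache_tile(cache_root, z, x, y, payload):
    """Write a tile's GeoJSON to cache_root/z/x/y.json, creating the
    nested directories recursively if they don't exist yet.

    Illustrative sketch only; the caching service described in the
    post is not shown here.
    """
    tile_dir = os.path.join(cache_root, str(z), str(x))
    # exist_ok=True makes this safe when two requests race to create
    # the same directory tree.
    os.makedirs(tile_dir, exist_ok=True)
    tile_path = os.path.join(tile_dir, "%d.json" % y)
    with open(tile_path, "w") as f:
        f.write(payload)
    return tile_path
```

The point of the recursive creation is that a freshly started server has an empty cache, so every level of the z/x/y hierarchy may be missing on the first request for a tile.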
