Uber, whatever its faults, provides value to its users, both the drivers and the riders. People appreciate or even enjoy the service, even if they don’t like the corporate behavior or economic disruption.   Solutions seem to mostly include boycotts (in favor of taxis or competitors like Lyft) and legal action.  But most of those solutions are pushing water uphill, because people actually like the service.

I have another solution: let’s rebuild the service without any major company involved. Let’s help software eat the world on behalf of the users, not the stockholders. In this post, I’ll explain a way to do it.  It’s certainly not trivial, and has some risks, but I think it’s possible and would be a good thing.

The basic idea is similar to this: riders post to social media describing the rides they want, and drivers post about the rides they are available to give.  They each look around their extended social graph for posts that line up with what they want, and also check for reasons to trust or distrust each other. That’s about it. You could do this today on Twitter, but it would take some decent software to make it pleasant and reliable, to make the user experience as good as with Uber.  To be clear: I’m aiming for a user experience similar to the Uber app; I’m proposing using social media as an underlying layer, not a UI.
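A minimal sketch of what those posts and the matching might look like, assuming a toy in-memory representation (all the class and field names here are invented for illustration; real posts would live on social media and carry identity, timestamps, and trust signals):

```python
from dataclasses import dataclass

# Hypothetical post structures, standing in for social media posts.

@dataclass
class RideRequest:
    rider: str
    origin: str
    destination: str

@dataclass
class RideOffer:
    driver: str
    origin: str
    destination: str

def matches(req: RideRequest, offer: RideOffer) -> bool:
    # Naive matching: identical origin and destination.
    # A real app would match on geography, time windows, and trust.
    return req.origin == offer.origin and req.destination == offer.destination

req = RideRequest("alice", "San Francisco", "Sebastopol")
offer = RideOffer("bob", "San Francisco", "Sebastopol")
print(matches(req, offer))  # True
```

The point of the sketch is that the matching logic lives entirely in the client software; no central service needs to see the posts.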

What’s deeply different in this model is that the provider of the software does not control the market. If I build this today and get millions of users, someone else can come along tomorrow with a slightly better interface or nicer ads, and the customers can move easily, even in the middle of a ride.  In particular, the upstart doesn’t need to convince all the riders and drivers to switch in order to bootstrap their system!  The market, with all its relationships and data, is outside the ridesharing system. As a user, you wouldn’t even know what software the other riders and drivers are using, unless they choose to tell you.

With this approach, open source solutions would also be viable.  Then the competition could arise quite literally tomorrow, as someone just forks a product and makes a few small changes.

This is no fun for big money investors looking for their unicorn exit, but it’s great for end users.  They get non-stop innovation, and serious competition for their business.

There are many details, below, including some open issues.  The details span areas of expertise, so I’m sure I’ve gotten parts incomplete or wrong. If this vision appeals to you, please help fill it in, in comments or posts of your own.

Critical Mass

Perhaps the hardest problem with establishing any kind of multi-sided market is getting a critical mass of buyers and sellers.  Why would a rider use the system, if there are not yet any drivers?  Why would drivers bother using the software when there are no riders? Each time someone tries the system, they find no one else there and they go away unhappy.

In this case, however, I think we have some options.   For example, existing drivers, including taxi operators, could start to use the software while they’re doing the driving they already do, with minimum additional effort.  Reasons to do it, in addition to optimistically wanting to help bootstrap this: it could help them keep track of their work, and it could establish a track record for when riders start to show up.

Similarly, riders could start to use it in areas without drivers if they understand they’re priming the pump, helping establish demand, especially if there were some fun or useful self-tracking features.

Various communities and businesses, not in the ridesharing business, might benefit from promoting this system: companies who have a lot of employees commuting, large events where parking is a bottleneck, towns with traffic issues, etc.  In these cases, in a niche, it’s much easier to get critical mass, which can then spread outward.

Finally, there are existing ridesharing systems that might choose to play in this open ecosystem, either because their motivation is noncommercial (eg eco carpooling) or because they see a way to share the market and still make their cut (eg taxi companies).

Privacy

In the model as I’ve described it so far, there’s no privacy. If I want a ride from San Francisco to Sebastopol, the whole world could see that. My friends might ask, uncomfortably, what I was doing in Sebastopol that day. This is a tricky problem, and there might not be a perfect solution.

In the worst case, the system ends up viable only for the kinds of trips you’re fine having public, perhaps your commute to work, or your trip to an event you’re going to post about anyway. But we can probably do better than that.  I currently see two imperfect classes of solution:

  1. Trust some third party organizations, perhaps to act as information brokers, seeing all the posts from both sides and informing each when there is a match, possibly masking some details. Or perhaps they certify drivers, which gives them access to your data, with an enforceable contract they’ll use it for these purposes only.
  2. Trust people to act appropriately when given the right social cues and pressure: basically, use advisory access control, where anyone can see the data, but only after they clearly agree that they are acting as part of the ridesharing system and that they will only use the data for that purpose. There might be social or legal penalties for violating this agreement.

There might also be cryptographic solutions, perhaps as an application of homomorphic encryption, but I’m not yet aware of any results that would fully address this issue.

Personal Safety

When I was much younger, hitchhiking was common. If you wanted to go somewhere without having a car, you could stand on the side of the road and stick out your thumb. But there was some notion this might be dangerous for either party, and in some places it became illegal. (Plus there was that terrifying Rutger Hauer and C. Thomas Howell movie.) There have been a few stories of assaults by Uber drivers, and the company claims to carefully vet drivers. So how could this work without a company like Uber standing behind the drivers?

There are several approaches here, that can all work together:

  1. Remote trust assessment. Each party should be able to see data on the other before agreeing to the ride.  This might include social graph connections to the other person, reviews posted about the other person (and by whom), and official certifications about the other person (including even: the ride is from a licensed taxicab).  When legally permissible this should, I think, even include information that might be viewed as discriminatory, like BlaBlaCar’s “Ladies Only” setting. It’s a tough trade-off, but I don’t think in a system like this anyone should be forced to ride with someone they’re not comfortable with. Hopefully an active and diverse enough market will allow everyone to get a ride.
  2. Immediate trust assessment. The expectation has to be set that people can back out of the deal at any point.  The person drives up and their car doesn’t look like in the picture?  You were expecting two passengers and there are three?  The driver didn’t turn when you expected them to? In each of these cases, there needs to be a clear, “Actually, no thanks” mechanism, for ending the process right there. Perhaps the person doing the cancellation incurs a small fee, 5-10 minutes salary; not enough that they would endanger their safety for money, but enough to stop frivolous cancellations. (The actual terms would be settled in messaging between apps before the ride.)
  3. Accountability and evidence. Each party posts details of the arrangement and progress of the ride, so if something inappropriate does happen, there is a trail of evidence.  There can be 3rd party log-bots which archive the details in case one of the parties decides to delete or edit theirs (on systems which support those operations). There can be logged photos and even live streams, and a culture that encourages that.  You post a geotagged photo of the car arriving to pick you up, and of the driver, because that’s part of a convincing review; it just happens that postings like that would make it much harder to get away with any crime involving the system.


Payment

One of the great things about the Uber experience is not having to think about money, dig for cash, or decide how much to tip the driver. For me, personally, it feels good to end the ride with thanks, instead of payment, even though I know I’m actually paying.

I think this part’s pretty easy to decentralize.  Each party can post the details about pricing and the payment mechanisms supported, including cash, check, credit card, PayPal, Venmo, Square Cash, and Bitcoin.  Some of these resolve or settle later, but the same reputation/trust mechanism used for personal safety should, I think, be able to handle this.  If the transaction is canceled hours after the ride, the aggrieved party should be able to make evidence of this clear in their review.

In the worst case, there could be safe-payment services that agree to absorb some of the risk of the transaction, giving more guarantees to both parties, in exchange for higher fees.   One of the companies trying to compete with Uber today might consider going into this business, perhaps along with the driver-certification business.  They can be like IBM supporting open source to beat back the Microsoft monopoly: they’ll never have the scale to beat Uber by themselves, but if they join the team of “everyone else”, they can probably carve out a nice market segment for themselves.

Now we turn to details about the use of social media to decentralize applications.

Which Social Media?

Which social media should these ridesharing apps use for posting their announcements and looking for announcements from others?   Twitter, Facebook, Mastodon, Instagram, … or something set up just for this service?  LinkedIn?

I think the answer is: all of the above, if they include the necessary functionality, as detailed below.  Right now, I think that’s Twitter and Mastodon, but there may be a way to make other systems behave as needed.

Context Collapse

Of course, no one wants to see a lot of irrelevant ridesharing posts. This is a special case of the general problem of Context Collapse: when we try to unify communications, to get economies of scale and critical mass, we can end up involved in a lot of unwanted communications.

I see three solutions here. I’m most optimistic about the last one:

  1. Keep the systems separate.  Make a separate social networking system for ridesharing. But that would be hard, and ridesharing is only one of many, many applications we’d like to decentralize. If there’s a huge hassle and/or expense for each one, we won’t get that decentralization.
  2. Use separate accounts for each user for each app.  I could use sandhawke_ridesharing on Twitter, etc, to keep this traffic separate. But it will still show up in search results, and I actually want to be able to leverage the existing social graph, not make a wholly new one.
  3. Zones, or opt-in hashtags. With this approach, “zone posts” are only seen by people who are looking for them and systems which are built to use them. This is worthy of a post of its own, so I’ll do that soon.
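The opt-in hashtag idea can be sketched very simply (the tag name here is made up): ridesharing apps search for posts carrying the zone tag, while ordinary clients never query for it, so the posts stay effectively invisible in normal timelines.

```python
# Hypothetical zone tag; only ridesharing apps look for it.
ZONE_TAG = "#zone-rideshare"

posts = [
    "Heading to the beach this weekend!",
    f"{ZONE_TAG} ride wanted: Cambridge -> Logan, 8am Tuesday",
    f"{ZONE_TAG} offering: Somerville -> downtown, weekday mornings",
]

def zone_posts(posts):
    # An opted-in app filters for the tag; everyone else ignores it.
    return [p for p in posts if ZONE_TAG in p]

for p in zone_posts(posts):
    print(p)
```

Of course this only avoids context collapse if platforms and clients cooperate in hiding opt-in tags by default, which is part of the "necessary functionality" question above.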

Post Format

What would the actual posts look like?  Current software industry trends would suggest something like {"rideFrom":{"lat":42.361958, "lon":-71.09122}}. This kind of JSON data works well when one organization controls the data format, but it breaks down when people do not all agree on all the syntactic and semantic details of the format.  Even if people are inclined to agree, actually getting it well specified is difficult and time consuming.  (Ask anyone who’s been involved in a data interchange standardization effort.)

A somewhat more palatable approach would be JSON-LD, with multiple schemas from different sources, and an understanding that consumers need to know how to work with them all.

My favorite approach is to use natural language sentence templates. Instead of all agreeing we’ll use JSON and the ‘rideFrom’ property, etc, apps use strings that read like natural language sentences, conveying exactly what they mean, but which can easily be parsed using declared input templates. This concept also needs a post of its own, so I’ll go into that separately.
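As a rough sketch of the template idea (the {slot} syntax and the particular sentence are invented for illustration), a declared template reads like English but can be compiled into a parser:

```python
import re

# A declared input template: readable as English, parseable by apps.
TEMPLATE = "I am looking for a ride from {origin} to {destination}."

def template_to_regex(template: str) -> re.Pattern:
    # Escape the literal text, then turn each {slot} into a
    # named capture group.
    pattern = re.escape(template)
    pattern = re.sub(r"\\{(\w+)\\}", r"(?P<\1>.+?)", pattern)
    return re.compile(pattern + "$")

post = "I am looking for a ride from San Francisco to Sebastopol."
m = template_to_regex(TEMPLATE).match(post)
print(m.groupdict())
# {'origin': 'San Francisco', 'destination': 'Sebastopol'}
```

The appeal is that the post remains meaningful to humans who lack the template, while apps that have declared it can parse it exactly.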

Trust and the Social Graph

It’s great if I can catch a ride from an existing contact, but in most cases, I will need to reach farther out into my social graph.  Some systems, like Twitter, make that graph public.  Some don’t.  Twitter’s graph, however, does not signify endorsement or trust.  I follow a few unpleasant and untrustworthy people just to keep track of what they’re up to. So we need some trust data in the social graph. This is hard.

Some wild ideas:

  1. Before relying on a path through the social graph, software could ping all the people on that path asking them to confirm these connections are based on trust.   But often responses will be too slow, if they come at all.  Still, at least the responses could be used later. People might be motivated to respond by having an important and trusted role in the process. On the other hand, people might hesitate to disclose how much they trust or fail to trust some other people.
  2. Sentiment analysis of replies?
  3. Confirming by using multiple social networks?  For me, LinkedIn connections probably convey the most trust, but I understand that varies.
  4. More ideas…?   How do we draw out people’s judgments of who else is trustworthy?

An alternative to having the social graph be visible is to have user-configured bots which automatically boost some posts on the user’s behalf.  If a friend of mine is looking for a ride, my bot can post that my friend is looking for a ride, etc.
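A boost bot like that might look something like this sketch (the trusted-contact list and post format are made up); the bot reposts on my behalf, extending a friend's reach one hop without publishing my full trust graph:

```python
# Hypothetical boost bot: reposts ride requests from my trusted
# contacts under my own account.
MY_TRUSTED = {"alice", "bob"}

def boost(posts):
    boosted = []
    for author, text in posts:
        if author in MY_TRUSTED and "ride wanted" in text:
            # Repost, crediting the friend.
            boosted.append(f"Boosting for {author}: {text}")
    return boosted

posts = [
    ("alice", "ride wanted: Boston -> Providence, Friday evening"),
    ("mallory", "ride wanted: anywhere, anytime"),
]
for p in boost(posts):
    print(p)
```

Each hop through a bot like this implicitly asserts one trust edge, which is exactly the data the matching process needs.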

Other Details?  Other Issues?

That’s what I’ve got.   Looking back, I can see it’s quite a lot.  Have I made anything harder than it needs to be? Is there a nice MVP that can ignore the complex issues? Are there other problems I’m missing?

Simplified RDF

November 10, 2010

I propose that we designate a certain subset of the RDF model as “Simplified RDF” and standardize a method of encoding full RDF in Simplified RDF. The subset I have in mind is exactly the subset used by Facebook’s Open Graph Protocol (OGP), and my proposed encoding technique is relatively straightforward.

I’ve been mulling over this approach for a few months, and I’m fairly confident it will work, but I don’t claim to have all the details perfect yet. Comments and discussion are quite welcome, on this posting or on the semantic-web@w3.org mailing list. This discussion, I’m afraid, is going to be heavily steeped in RDF tech; simplified RDF will be useful for people who don’t know all the details of RDF, but this discussion probably won’t be.

My motivation comes from several directions, including OGP. With OGP, Facebook has motivated a huge number of Web sites to add RDFa markup to their pages. But the RDF they’ve added is quite constrained, and is not practically interoperable with the rest of the Semantic Web, because it uses simplified RDF. One could argue that Facebook made a mistake here, that they should be requiring full “normal” RDF, but my feeling is their engineering decisions were correct, that this extreme degree of simplification is necessary to get any reasonable uptake.

I also think simplified RDF will play well with JSON developers. JRON is pretty simple, but simplified RDF would allow it to be simpler still. Or, rather, it would mean folks using JRON could limit themselves to an even smaller number of “easy steps” (about three, depending on how open design issues are resolved).

Cutting Out All The Confusing Stuff

Simplified RDF makes the following radical restrictions to the RDF model and to deployment practice:

  1. The subject URIs are always web page addresses. The content-negotiation hack for “hash” URIs and the 303-see-other hack for “slash” URIs are both avoided.

    (Open issue: are html fragment URIs okay? Not in OGP, but I think it will be okay and useful.)

  2. The values of the properties (the “object” components of the RDF triples) are always strings. No datatype information is provided in the data, and object references are done by just putting the object URI into the string, instead of making it a normal URI-label node.

    (Open issue: what about language tags? I think RDFa will provide this for free in OGP, if the html has a language tag.)

    (Open issue: what about multi-valued (repeated) properties? Are they just repeated, or are the multiple values packed into the string, perhaps? OGP has multiple administrators listed as “USER_ID1,USER_ID2”. JSON lists are another factor here.)

At first inspection this reduction appears to remove so much from RDF as to make it essentially useless. Our beloved RDF has been blown into a hundred pieces and scattered to the wind. It turns out, however, it still has enough magic to reassemble itself (with a little help from its friends, http and rdfs).

This image may give a feeling for the relationship of full RDF and simplified RDF:

Reassembling Full RDF

The basic idea is that given some metadata (mostly: the schema), we can construct a new set of triples in full RDF which convey what the simplified RDF intended. The new set will be distinguished by using different predicates, and the predicates are related by schema information available by dereferencing the predicate URI. The specific relations used, and other schema information, allows us to unambiguously perform the conversion.

For example, og:title is intended to convey the same basic notion as rdfs:label. They are not the same property, though, because og:title is applied to a page about the thing which is being labeled, rather than the thing itself. So rather than saying they are related by owl:equivalentProperty, we say:

  og:title srdf:twin rdfs:label.

This ties them together, saying they are “parallel” or “convertible”, and allowing us to use other information in the schema(s) for og:title and rdfs:label to enable conversion.

The conversion goes something like this:

  1. The subject URLs should usually be taken as pages whose foaf:primaryTopic is the real subject. (Expressing the XFN microformat in RDF provides a gentle introduction to this kind of idea.) That real subject can be identified with a blank node or with a constructed URI using a “thing described by” service such as t-d-b.org. A little more work is needed on how to make such services efficient, but I think the concept is proven. I’d expect Facebook to want to run such a service.

    In some cases, the subject URL really does identify the intended subject, such as when the triple is giving the license information for the web page itself. These cases can be distinguished in the schema by indicating the simplified RDF property is an IndirectProperty or MetadataProperty.

  2. The object (value) can be reconstructed by looking at the range of the full-RDF twin. For example, given that something has an og:latitude of “37.416343”, og:latitude and example:latitude are twins, and example:latitude has a range of xs:decimal, we can conclude the thing has an example:latitude of “37.416343”^^xs:decimal.

    Similarly, the Simplified RDF technique of putting URIs in strings for the object can be undone by knowing the twin is an ObjectProperty, or has some non-Literal range.

    I believe language tagging could also be wrapped into the predicate (like comment_fr, comment_en, comment_jp, etc) if that kind of thing turns out to be necessary, using an OWL 2 range restriction on the rdf:langRange facet.
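The steps above can be sketched in code. In this toy converter the schema is a hard-coded table and the constructed subject is a made-up placeholder (a real implementation would dereference the predicate URIs and use a “thing described by” service), but it shows the twin-plus-range reconstruction:

```python
from decimal import Decimal

# Toy schema: for each simplified property, its full-RDF twin and
# the twin's declared range. A real system would fetch this by
# dereferencing the predicate URI.
SCHEMA = {
    "og:title": {"twin": "rdfs:label", "range": "xs:string"},
    "og:latitude": {"twin": "example:latitude", "range": "xs:decimal"},
}

def reconstruct(subject_page, prop, value):
    info = SCHEMA[prop]
    # Step 1: the page is about the real subject (foaf:primaryTopic);
    # stand in a constructed blank-node-style subject for it.
    real_subject = f"_:topicOf({subject_page})"
    # Step 2: recover the datatype from the twin's declared range.
    typed = Decimal(value) if info["range"] == "xs:decimal" else value
    return (real_subject, info["twin"], typed)

triple = reconstruct("http://example.org/page", "og:latitude", "37.416343")
print(triple)
# ('_:topicOf(http://example.org/page)', 'example:latitude',
#  Decimal('37.416343'))
```

Note `example:latitude` here is a hypothetical twin property, standing in for whatever full-RDF vocabulary og:latitude gets paired with.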

Next Steps

So, that’s a rough sketch, and I need to wrap this up. If you’re at ISWC, I’ll be giving a 2 minute lightning talk about this at lunch later today. But if you’ve read this far, the talk won’t say anything you don’t already know.

FWIW, I believe this is implementable in RIF Core, which would mean data consumers which do RIF Core processing could get this functionality automatically. But since we don’t have any data consumer libraries which do that yet, it’s probably easiest to implement this with normal code for now.

I think this is a fairly urgent topic because of the adoption curve (and energy) on OGP, and because it might possibly inform the design of a standard JSON serialization for RDF, which I’m expecting W3C to work on very soon.

What Is The Web?

October 27, 2008

This question doesn’t usually matter very much, but sometimes we get into arguments about whether something (eg e-mail, streaming video, instant-messaging) is part of “The Web” or not.

Due to various circumstances last week I found myself drafting this text about it (while kicking myself and telling myself it was a waste of time). Jet-lag insomnia, more or less.

I think the main value of my text, or any effort like this, is in revealing our often-unstated ideas about what the Web can or should become. Is this “architecture”? When you renovate a building, adding new wings, how much are you changing its architecture? How much are you changing its subtle good and bad features? Certainly it’s good to think about that question before making the changes.

It is also nice to get our terms straight. I loathe the term “Resource” and quite dislike the term “URI”. Along the way here, I try to define what I think are the key terms in Web Architecture. For extra credit, I end up defining a “Web Standard”.

Wikipedia says the Web is:

…a system of interlinked hypertext documents accessed via the Internet.

In contrast, WebArch says it is:

…an information space in which the items of interest, referred to as resources, are identified by global identifiers called Uniform Resource Identifiers (URI)

Without going into my dislike for WebArch, I really wish the Semantic Web would/could (somehow) stick to a definition closer to Wikipedia’s.

My one-line definition is this:

The Web is a global communications medium provided by a decentralized computer system.

In more detail:

“The Web is a global…”: Although conceptually there could be many different “webs”, there is one which is understood as “The Web”. The Web follows (and uses) The Internet in being designed to connect different local systems. An installation of web technology usually ends up connected to others, becoming part of the unified global Web, because in most situations the value of doing so greatly outweighs the costs. The end effect is a single, integrated system built up of all available, connected components.

“communications medium”: The Web provides a way for people to communicate with each other. It does this by letting them create web pages (often collected into web sites), with unique names (the web address, or URL), which other people can view and interact with. The system does not restrict what exactly constitutes a “page” (sometimes called a “resource”): originally, Web pages were essentially an on-line version of paper documents, but they have evolved to now provide, within each “page”, a full user interface to remote computer systems. The web addresses are essential, because they allow people to communicate about particular pages and, crucially, they allow one page to name another (to link to another) so that users can learn about and “visit” (use) other pages.

Although generally intended for use by people, the Web is sometimes used by other computer systems. A search engine traverses the Web like a user, and then helps users find the pages they want. The Web Services and Semantic Web standards provide various ways for computer systems to interact with each other over the Web, attempting to leverage Web infrastructure as an element in new systems.

“decentralized computer system”: While the Web is in one sense a single system, it is composed of other computer systems, most of which serve as web servers or web clients. It has no central point of control (except perhaps the Domain Name System (DNS), which is part of the underlying Internet); instead, the system’s behavior for a particular user depends on the clients and servers being used by that user. Many features of the Web rely on the behavior being essentially the same for all users, and that consistency depends on the underlying systems behaving consistently. Where there is consistent behavior, and that behavior is documented, the document is a Web Standard.