I’m disappointed in the pace of development of the Semantic Web, and I’m optimistic that the Lean Startup ideas can help us move things along faster.

I’ve been a fan of Eric Ries and the Lean Startup ideas for a while, but last night I was lucky enough to see him speak and to chat with some other adherents. There are a lot of ideas here, but the bit that jumps out at me today is this, loosely paraphrased:

Reality distortion fields are bad. Instead of using charisma, style, and emotions to motivate your colleagues to act on faith, motivate them with experimental evidence.

I think we have scant evidence that the Semantic Web will work; most of us have been working on it as an act of faith. We believe, without solid evidence, that it can work and will be a good thing when it does. You could say we’re operating in an RDF: not a resource description framework, but a reality distortion field.

The Lean Startup methodology says that we should get out of that field as quickly as possible, doing the fastest experiments that will teach us what really works and what doesn’t. On faith we can do 5+ year projects, hoping to show something interesting. Instead, we should be doing 3-month projects to test a hypothesis about how this is all going to be useful.

It’s a shame that most of us are funded in ways that don’t support or reward this at all. It’s a shame the research funding agencies operate on such a glacial and massive scale; in many ways they seem geared more towards keeping people busy and employed than actually innovating and producing knowledge for the world.

Below are my notes taken during Eric’s talk, lightly cleaned up. I believe slides and the talk itself are available online; it’s a talk he often gives, so if you have the time, watch it instead of just skimming my notes (e.g. this one at Stanford). Someone else with much better formatting posted their notes from last night’s talk. You probably want to read those instead, and then come back here and share your insights with us.


$$ Thu Jan 27 18:20:41 EST 2011 ((( Eric Ries

2 yrs ago at Hobee's in Palo Alto, 6 people, first talking about this...

Silicon Valley is parochial, rarely getting out of the bubble.

#leanstartup

New conversation -- what is entrepreneurship?

Went straight from unheard-of to overhyped, without people having learned about it.

Put entrepreneurship on a more solid footing.

What is a startup?

     A startup is a human institution designed to deliver a new
     product or service under conditions of extreme uncertainty.

Nothing to do with size of company, sector of the economy, or
industry.

ALL THE BORING STUFF, and how to get better at it.

    Startup = Experiment

Web 2.0 chart --- lots failed at 3 years.

they all failed for BAD reasons.

and how many really lived up to their potential....???!!!   SO FEW.

"If you do everything I did, you can fail like I did."

We need a giant industrial support group.

"Hi, I'm eric, and most of my startups failed."

It's all Taylor's fault.  :-)
father of scientific management.

1911.   birth of management
"In the past, the man was first.  In the future, the system will be first."

"Work should be done efficiently"
"Work should be divided into tasks"
"It's possible to organize craftsmen"
   Management by exception -- only have them report their exceptions.

  Now, decomposing work into tasks is 100 years old.

Everything in this room was constructed under the supervision of managers trained by Taylor and his disciples.

Shadow Beliefs:
   * We know what customers want   (reality distortion field)
   * We can accurately predict the future
          just don't believe the hockey-stick spreadsheet
   * Advancing the plan is progress
         eg keep everyone busy, write code, do your functional job!
	 -- if we're building something no one wants, is it progress?

           [[ NO -- real progress is LEARNING ]]


The Lean Revolution   (Lean Manufacturing)
    W. E. Deming,    Taiichi Ohno

it's not Time, Quality, Money -- pick two

we can get all three by being customer focused.

Agile Development

Alas, Agile development comes out of big IT departments.

works IF you know what the customer really needs.


Steve Blank.    
     Customer Development
     Agile (Product) Development

IMVU story
      IM networks -- join them all.

      he wrote this, in 6 months, to ship to customers.
      5 years before that.

      had to pivot to standalone network.

      GREAT code, but no one wanted it.

Claim to have learned something -- about to get fired.  :-)

learning is a 4 letter word in management
     -- bad plan -- fired
     -- failure to execute -- fired

Ask yourself: IF my goal was to learn this, could I have done this
without writing the code?

YES --- just make the landing page!!!

As an entrepreneur, you NO LONGER HAVE a FUNCTIONAL DISCIPLINE.
   You do whatever you need to do to get there.

Entrepreneurship is management

  * OUR GOAL is to create an institution, not just a product
  * Traditional management practices fail.  (MBA)
  * Need entrepreneurial management -- working under extreme uncertainty

The Pivot --- SUDDENLY overhyped.

YOU MUST be able to do this.    The successes can do this.
They can tell the good ideas from the bad, inside the distortion field.

SPEED WINS.
   how many pivots you have left.

   if we can reduce the time between pivots,
   we can increase the odds of our success.

BUILD -> MEASURE -> LEARN

startup=    turns ideas into code

IF YOU DIDN'T BUILD ANYTHING, you can't pivot
IF YOU DIDN'T TEST IT WITH CUSTOMERS, you can't pivot

MINIMIZE total time through the loop.  Cycle time dominates.

General mgmt is about efficiency, not cycle time.

GOING THROUGH THE LOOP -- that's how you settle arguments between founders.

How much design -- a reasonable balance.

FIVE principles.

   -- Entrepreneurs are everywhere

         anywhere we seek out uncertainty,
	 which is everywhere, given the uncertainty from the IT revolution.

   -- Entrepreneurship is management
   
   -- validated learning

   -- innovation accounting

        normally just compliance reporting

	but:  drive accountability -- hold managers accountable
	GM: "std volume" to compute how many cars each division
	is expected to sell.   Allows GM to give bonuses.

	NOT good for entrepreneurs.

	"success theater"      (cumulative total registrations  Heh.)

	ACTIONABLE METRICS, per customer, NOT VANITY METRICS.

	Facebook per-customer behaviors were exciting.  Customers were
	heavily engaged in voluntary exchange with company.  And very
	viral.
	
		NEED an accounting paradigm for entrepreneurs to prove
		they've done validated learning.

		so you never take credit for random stuff, but only
		take credit for what you deserve.

   -- build measure learn


HOW do we know when to pivot?

    as if it were obvious when there's a failure.

    land of the living dead.   

    persevere straight into the ground.

    Right answer: (accounting)
    	  pattern, like in science, 
	  when the experiments are no longer very productive.
	  When we can't move the needle very much.

Vision, Strategy, or Product
	- what makes a great company?
	500 Auto companies before Ford!!!
	they didn't have the right process.

	Vision doesn't change.  It's about changing the world.
	Strategy is how to build a business around that.

	product dev == optimization

	pivot is changing strategy, not vision.

	THERE is no testing The Vision.  We're NOT trying to
	eliminate vision.

What should we measure?  How do products grow?

     Entrepreneurial accounting

Are we creating value?

What's in the MVP?

       - should a feature be in or out?  Out.

Can we go faster?

NEW BOOK.

================================================================

lean.st/LeanStartupBos

startuplessonslearned

$$ Thu Jan 27 19:53:19 EST 2011 

How do you keep engineers having faith in the process, given the MVP?

How to manage engineers under uncertainty?

    1.   Keep them calm.   Heads down, cranking out code.

    [reeks of Frederick Taylor.]

    2.   Enlist all functions in process of discovering if on right track.
    	 
	 ABANDON the Reality Distortion Field.

	 People will be way more creative if they know what's going on.

	 The truth will set you free.

================

Q: Newbie: use of the term "movement"

Eric: I don't want other people to be doing this.

Eric: I used to be a coder.   What do I do now???

there IS something going on worldwide.   this is science, not religion.

let's be careful.

  if it works for entrepreneurs, it's part of Lean Startup.

We've learned a lot over the past two years.

The movement is not me -- the movement is you guys testing these ideas, in changing the world.

this is NOT about proprietary advantage.

Eric used to think the right way to change the world was to get the VCs to
evangelize.   Sooooo dumb of me.

vc:  "im not that interested in improving the world, just my profolio"

But now, we should do science.  If we all do it, we'll all improve the
world and live in a better world.

Q: how to test ideas people are not searching for.

eg Dropbox -- no one knows they want it.

If customers don't know they have the problem, or don't know its name,
you have to find a new way.

At IMVU, people didn't know it.  "Outbound is the new inbound."  We did ad
campaigns, $5/day, buying keywords of every adjacent product, "*" +
chat.  And drove people to our landing page.  We wanted to learn
the differences in conversion between these channels.

Dropbox's MVP was a video, aimed at Digg users.  Drove people to a
waiting list, beta users.
   justin tv  sl conf  video

================

MBAs.

how much do MBAs need to re-learn?

ER: I'm doing entrepreneur-in-residence at Harvard Business School.

but why waste time with MBAs?

"what do people say about us when we're not in the room?"

MBAs have one big advantage:  very process and discipline oriented.

if you don't have some failures, you don't learn.

you need to be able to tell what change to the product/market caused
the numbers to change.  IT HAS TO BE A VALID SCIENTIFIC EXPERIMENT.

================

Some new stuff:

the right things to measure are clear and consistent across all startups.

 1.  value test -- do you know it creates value

 2.  working engine of growth.

two feedback loops:
   -- eg the loop in a cylinder engine, and the driver-and-surroundings loop
   write down how to get to work == Taylor plan

three engines of growth

  -- paid.   You make $1 per customer, and they cost $0.50 to buy.
      (have to be able to buy customers)

  -- viral.   As a necessary consequence of customers using it, they
    get their friends to use it.  "Someone has tagged you in a photo" --
    you HAVE to click on that.   Even some fashion businesses.
    They "grow themselves" by exploiting a bug in human nature.

   -- sticky, engagement.  Addictive, network effects, lock-in, eBay,
      WoW, compounding interest.  So a small viral factor can compound, if
      sticky enough.

================

easily replicated product, get to market first?

fear: someone will steal my idea.

So: take your second-best idea, and try to get someone to steal it.
TRY to get them to steal your idea.    PEOPLE don't steal ideas.

IT SEEMS NUTS, BUT IT'S TRUE.

You need a good idea.

Threat by a big company to clone you -- they poached a co-founder --
came out with the exact product, two years delayed.  A $100M failure for
them.

FIRST mover advantage is very rare in reality.     (!!!)

================

one person from each company.   how to get whole company to buy in?

people often say: that's a really great idea for someone else to do.

the issue is the WORK is a system.  Your company is a perfect robust
system, stable.  very very hard to change -- must be planned
carefully.

try to find one area where there is painful uncertainty, and say there
is a community of people trying science to solve this problem.

everyone nods at maximizing speed through the loop.

BUT if you do that, you will MAKE PEOPLE FEEL INEFFICIENT.  People
will be interrupted to do things "not their job".  There will be a
team powwow where people say they hate it making them less efficient.
NEED people bought into the theory -- understanding the value of VALIDATED
LEARNING.  Only do this where you have authority, maybe just yourself.

================

How does Lean Startup affect management & sales force?  Lab equipment
company. --- "salespeople are whiners."  Really, customers were giving
great feedback, and none of that was making it back to management.

The Four Steps to the Epiphany -- Steve Blank's book -- perfect on enterprise sales.

YOU CANNOT DELEGATE customer development.  founders and senior mgmt
have to be in the room with the customers, at least some of the time.

Salespeople aren't supposed to be good listeners.

Mgrs should DO THE SALES THEMSELVES.  The goal is not to make money,
it's to get validated learning....

ONCE we understand how to do the sales, THEN give it to the salesfolk,
as per Steve Blank.

if using sales force, you are doing Paid Engine Of Growth.

================

Q: I have a product that people haven't paid for.  Mainstream product.
Personal keepsake.  Needs to look really good.  Hard to measure.
Style counts.  What's the MVP in that scenario?

Why can't you do a landing page?

  keepsake book.

goal of MVP --- least amount of work to learn what needs to be
learned.  such as whether customers will pay.   eg get pre-orders.

you can always test demand through fake landing pages.

"Concierge" from food-on-the-table.  did it all by hand, until they
figured out what folks REALLY wanted.

PEOPLE will not truthfully answer what they would do.  "Would you buy
this" turns out to be TERRIBLE DATA.

**** YOU ALWAYS CAN TEST


requires risking rejection, which is VERY difficult.     CUSTOMER SERVICE HURTS!!!

* Eric *HATES* customer feedback *

does he really want to know what you thought about today's talk?  No!!!

================

When collecting feedback, it's NOT ABOUT YOU, it's about the person
giving it to you.

"product is okay" means "product sucks but I'm polite"

================


very very hard.      But the rewards are immense.

think about all the people utterly wasting their time.

let's redeem them;  make it happen!

$$ Thu Jan 27 20:35:27 EST 2011 )))

Simplified RDF

November 10, 2010

I propose that we designate a certain subset of the RDF model as “Simplified RDF” and standardize a method of encoding full RDF in Simplified RDF. The subset I have in mind is exactly the subset used by Facebook’s Open Graph Protocol (OGP), and my proposed encoding technique is relatively straightforward.

I’ve been mulling over this approach for a few months, and I’m fairly confident it will work, but I don’t claim to have all the details perfect yet. Comments and discussion are quite welcome, on this posting or on the semantic-web@w3.org mailing list. This discussion, I’m afraid, is going to be heavily steeped in RDF tech; simplified RDF will be useful for people who don’t know all the details of RDF, but this discussion probably won’t be.

My motivation comes from several directions, including OGP. With OGP, Facebook has motivated a huge number of Web sites to add RDFa markup to their pages. But the RDF they’ve added is quite constrained, and is not practically interoperable with the rest of the Semantic Web, because it uses simplified RDF. One could argue that Facebook made a mistake here, that they should be requiring full “normal” RDF, but my feeling is their engineering decisions were correct, that this extreme degree of simplification is necessary to get any reasonable uptake.

I also think simplified RDF will play well with JSON developers. JRON is pretty simple, but simplified RDF would allow it to be simpler still. Or, rather, it would mean folks using JRON could limit themselves to an even smaller number of “easy steps” (about three, depending on how open design issues are resolved).

Cutting Out All The Confusing Stuff

Simplified RDF makes the following radical restrictions to the RDF model and to deployment practice:

  1. The subject URIs are always web page addresses. The content-negotiation hack for “hash” URIs and the 303-see-other hack for “slash” URIs are both avoided.

    (Open issue: are html fragment URIs okay? Not in OGP, but I think they will be okay and useful.)

  2. The values of the properties (the “object” components of the RDF triples) are always strings. No datatype information is provided in the data, and object references are done by just putting the object URI into the string, instead of making it a normal URI-label node.

    (Open issue: what about language tags? I think RDFa will provide this for free in OGP, if the html has a language tag.)

    (Open issue: what about multi-valued (repeated) properties? Are they just repeated, or are the multiple values packed into the string, perhaps? OGP has multiple administrators listed as “USER_ID1,USER_ID2”. JSON lists are another factor here.)
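
For concreteness, here is a sketch of what data under these two restrictions might look like; the page addresses and properties are made up for illustration:

    # Illustrative simplified-RDF statements: each row is (page URL,
    # property, string value). The subjects are plain web page addresses,
    # and every value is a string -- even the one that refers to another
    # resource, which is just a URI written into the string.
    simplified = [
        ("http://example.com/rocky", "og:title", "Rocky"),
        ("http://example.com/rocky", "ex:sequel", "http://example.com/rocky2"),
    ]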

At first inspection this reduction appears to remove so much from RDF as to make it essentially useless. Our beloved RDF has been blown into a hundred pieces and scattered to the wind. It turns out, however, it still has enough magic to reassemble itself (with a little help from its friends, http and rdfs).

This image may give a feeling for the relationship of full RDF and simplified RDF:

Reassembling Full RDF

The basic idea is that given some metadata (mostly: the schema), we can construct a new set of triples in full RDF which convey what the simplified RDF intended. The new set is distinguished by using different predicates, and the predicates are related by schema information available by dereferencing the predicate URI. The specific relations used, and other schema information, allow us to unambiguously perform the conversion.

For example, og:title is intended to convey the same basic notion as rdfs:label. They are not the same property, though, because og:title is applied to a page about the thing which is being labeled, rather than the thing itself. So rather than saying they are related by owl:equivalentProperty, we say:

  og:title srdf:twin rdfs:label.

This ties them together, saying they are “parallel” or “convertible”, and allows us to use other information in the schema(s) for og:title and rdfs:label to enable conversion.

The conversion goes something like this:

  1. The subject URLs should usually be taken as pages whose foaf:primaryTopic is the real subject. (Expressing the XFN microformat in RDF provides a gentle introduction to this kind of idea.) That real subject can be identified with a blank node or with a constructed URI using a “thing described by” service such as t-d-b.org. A little more work is needed on how to make such services efficient, but I think the concept is proven. I’d expect Facebook to want to run such a service.

    In some cases, the subject URL really does identify the intended subject, such as when the triple is giving the license information for the web page itself. These cases can be distinguished in the schema by indicating the simplified RDF property is an IndirectProperty or MetadataProperty.

  2. The object (value) can be reconstructed by looking at the range of the full-RDF twin. For example, given that something has an og:latitude of “37.416343”, og:latitude and example:latitude are twins, and example:latitude has a range of xs:decimal, we can conclude the thing has an example:latitude of “37.416343”^^xs:decimal.

    Similarly, the Simplified RDF technique of putting URIs in strings for the object can be undone by knowing the twin is an ObjectProperty, or has some non-Literal range. (The code sketch after this list illustrates the conversion.)

    I believe language tagging could also be wrapped into the predicate (like comment_fr, comment_en, comment_jp, etc.) if that kind of thing turns out to be necessary, using OWL 2 range restrictions on the rdf:langRange facet.
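
Here is a minimal sketch of that conversion in code. The SCHEMA table stands in for information we would really obtain by dereferencing the predicate URIs, every property is treated as indirect, and all the concrete names are illustrative assumptions, not part of any spec:

    # Triples are plain (subject, predicate, object) tuples.
    OG = "http://ogp.me/ns#"                      # illustrative namespace
    RDFS_LABEL = "http://www.w3.org/2000/01/rdf-schema#label"
    XS_DECIMAL = "http://www.w3.org/2001/XMLSchema#decimal"
    PRIMARY_TOPIC = "http://xmlns.com/foaf/0.1/primaryTopic"

    # simplified predicate -> (its full-RDF twin, the twin's range, if any)
    SCHEMA = {
        OG + "title": (RDFS_LABEL, None),
        OG + "latitude": ("http://example.org/latitude", XS_DECIMAL),
    }

    def reassemble(simple_triples):
        """Rebuild full RDF: each page URL becomes a document node whose
        foaf:primaryTopic is a blank node standing for the real subject;
        each predicate is replaced by its twin, and the string object is
        given the twin's range as its datatype when the schema has one."""
        full, topic = [], {}
        for page, pred, value in simple_triples:
            twin, datatype = SCHEMA[pred]
            if page not in topic:
                topic[page] = "_:thing%d" % len(topic)
                full.append((page, PRIMARY_TOPIC, topic[page]))
            obj = (value, datatype) if datatype else value  # typed literal
            full.append((topic[page], twin, obj))
        return full

    for t in reassemble([("http://example.com/page", OG + "title", "Example"),
                         ("http://example.com/page", OG + "latitude", "37.416343")]):
        print(t)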

Next Steps

So, that’s a rough sketch, and I need to wrap this up. If you’re at ISWC, I’ll be giving a 2-minute lightning talk about this at lunch later today. But if you’ve read this far, the talk won’t say anything you don’t already know.

FWIW, I believe this is implementable in RIF Core, which would mean data consumers which do RIF Core processing could get this functionality automatically. But since we don’t have any data consumer libraries which do that yet, it’s probably easiest to implement this with normal code for now.

I think this is a fairly urgent topic because of the adoption curve (and energy) on OGP, and because it might possibly inform the design of a standard JSON serialization for RDF, which I’m expecting W3C to work on very soon.

Why Decentralize Facebook

October 17, 2010

Last week, I saw The Social Network. I enjoyed it as a movie (like everyone else, it seems), but it also made me unhappy, because it reminded me what a misdirected force Facebook is in danger of becoming (or already is). As most people realize, Facebook centralizes too much power; unless it changes course, this will be its undoing. I hear cheers from some in the audience, but it’s the users who will suffer along the way.

I’ll start with a quote from writer Jessi Hempel, reviewing the movie from the perspective of someone who claims to know Mark Zuckerberg personally:

The real-life Zuckerberg was maniacally focused on building a web site that could potentially connect everyone on the planet. As early as 2005, he told me, “It’s a social utility and what makes it work will be ubiquity.” [Fortune]

To a first approximation, that’s the same as my goal of many years: building a system to connect everyone on the planet. But I don’t think it can possibly work if it’s a centralized system, with one organization controlling it to any substantial degree. Facebook may have 500m users, but it’s not going to get to 5b users until it’s a truly decentralized, open platform like the Internet and the Web.

More importantly, it won’t get to the point where we can, in good conscience, require or assume our fellow travellers on this planet use it, as we generally can with email, the Web, and the telephone network. Some communities (e.g. schools) are requiring people to use Facebook, and I’m not the only one who finds that scary and offensive.

Of course, Mark Zuckerberg is a smart guy. Wired reports him saying:

I don’t think the world is going to evolve in a way that there is just one big site. I think it is going to be that there are going to be a lot of really great services and we are helping to get it there. I think people are always a little skeptical when something grows to something big, but I think you need to look at what it is doing.

And he’s not the only one. When Google made their first attempt to replace email with Wave, they knew it would have to be decentralized, with them just being one of many equal hubs.

When I was younger, I loved decentralization because it got us out from under the control of authorities I didn’t respect. I think that particular fire may have gone out for me, but I still see the need: if we’re going to build the kind of universally shared apps the planet needs (and Facebook dreams of), they have to be built on an open, decentralized platform. Otherwise there is no way they’ll be able to reach even as far as the Web does now.

In a perfect world, I would now sketch out how to build a decentralized version of Facebook. But I seem to have too much else to do right now. So, at very least, that will have to wait for another day.

I can say that it would be built using linked data. I came to linked data as a good way to build global scale shared/social apps, and I still think it’s the best approach. There are some more details to work out, though. Sadly, I haven’t come across any promising funding or business models to support that work. Decentralized businesses don’t have market lock-in and $100m+ exits.

It may be that Diaspora will do it. I’m confident that before they get very far they’ll have to re-invent or adopt RDF, and eventually the rest of the SemWeb stack. I haven’t yet looked at their design. It may also be that Facebook itself will do it. (The fact that Zuckerberg still controls the company, instead of investors, makes it somewhat more likely.)

I suppose, after saying all this, it’s on me to show how SemWeb technology actually helps. Or is that obvious?

Edited to Add: I got a private question about my claim that Facebook can’t scale to 5b users, so let me expand on that a little. I see two things stopping them:

  1. A branding and look-and-feel problem. Some people hate Facebook, without even knowing why. Some people find the site awkward and difficult. This is going to be true of any site; I think the only way around this is to provide for multiple brands with multiple user interfaces. In theory, Facebook could do this themselves, much like car manufacturers have multiple “makes”: Cadillac and Chevy are just product lines from the same company, but people’s feelings are directed mostly at the product line.
  2. A trust issue. Some communities (including some governments) will, quite rightly, refuse to trust Facebook to operate in the way they want. It’s possible Facebook can find a way to address this concern as well, with special contracts, and even special data centers. For instance, it wouldn’t be impossible for them to build a Facebook cluster for CIA internal use, in a CIA facility, subject to full CIA controls, but still somewhat interoperable with Facebook at large. But I wouldn’t hold my breath waiting for that to happen, either. I’m not sure how it will play out when teachers ask their students, and their parents, to use Facebook.

So, that’s not an ironclad argument that they can’t grow to 5b, but that’s what I’m thinking.

Explaining Linked Data

June 8, 2010

I’m going to try explaining linked data again, tonight, at the Cambridge Semantic Web Gathering. I will attempt to keep it simple, while still covering the important details. We’ll see how it goes.

My slides, for people who want a peek ahead of time, are here.

ETA: Thanks to Marco Neumann, there’s a video of my talk. I’m pretty happy with it. Also, my Venn Diagram slide, which you’re welcome to re-use with credit.

Sometimes, if you stand in the right place and squint, JSON and RDF line up perfectly. Each time I notice this, I badly want a way to make them line up all the time, no matter where you’re standing. And, actually, I think it’s pretty easy.

I’ve seen a few proposals for how to work with RDF data in JSON, but the ones I’ve seen put too much burden on JSON folks to accommodate RDF. It seems to me we can let JSON keep doing what it does so well and, meanwhile, provide bits of RDF which can be adopted when needed. Instead of pushing RDF on people, allow them to take the parts they find useful.

In thinking about it, I’ve come up with six things RDF can do that are not standard parts of JSON. These are things one can do with JSON, of course, but not in any standard way. My suggestion is that these bits of functionality be provided in an RDF-compatible way (as I detail below), so that the JSON world and the RDF world can start to really play well together.

I’m interested to hear what people think of this. Blog comment, email to sandro@hawke.org (maybe cc semantic-web@w3.org?), or catch me in the halls at SemTech. I expect this general topic of RDF-meets-JSON will be discussed at the RDF Next Steps workshop, and if the stars line up right, maybe we can get a W3C Recommendation in this space in the next year or so. Let’s call this particular proposal JRON 0.1 (Javascript RDF Object Notation), not “Sandro’s Proposal”, so I can be freer to like other designs and be properly neutral.

Step 0: Start with ordinary JSON

In general, JSON and RDF are very similar, although they are usually described using different terminology. Of course, they both have strings and numbers. They both have a way of encoding a sequence of items: arrays in JSON, lists in RDF (some details below). The main structuring is around key-value pairs, which JSON calls an ‘object’. In RDF we call it the “subject” and focus on its connection with each key-value pair; the three together form an RDF triple.

The point here is that ordinary JSON structures correspond to an important subset of RDF. They don’t exactly match that subset because RDF uses namespaces, as detailed in step 5 below. The other steps below show the ways in which JSON is a subset of RDF. If one takes all the steps here, using JSON with these conventions, one has full RDF.
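
For instance (a sketch with made-up keys), this ordinary JSON object corresponds to two triples about a single subject; the subject is a blank node, since plain JSON gives the object no global name:

    doc = {"name": "Alice", "age": 42}

    triples = [
        ("_:b0", "name", "Alice"),   # one triple per key-value pair
        ("_:b0", "age", 42),
    ]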

So, here are the steps. Steps 1-3 are pretty simple and not very interesting. They address everyday concerns in data processing. Steps 4-6 may be a little more surprising if you’re not familiar with RDF.

Step 1: Allow Extended Datatypes

Why: For datatypes, JSON only has strings, numbers, and booleans. Sometimes people want to store and manipulate other datatypes, such as dates, or application-specific datatypes.

How: RDF uses XML’s datatype mechanism, where data values are conveyed as a pair of items: a lexical representation (a sequence of characters) and a datatype identifier (a sequence of characters which happens to be a URI). Each datatype is a mapping from strings (lexical representations) to values; the datatype identifier tells us which datatype is to be used to interpret this particular representation.

In JRON, we represent this pair like this:

{ "__repr": "2010-03-06",
  "__type": "http://www.w3.org/2001/XMLSchema#date" }

You can put this as a value in a list or in a key-value pair, just like a string or number.

RDF doesn’t restrict which datatypes are used. Some recent standards work selected this list as the set people should implement.

Personally, I’m not sure users need to be able to extend datatypes. I see dates being important, but otherwise I’m not convinced. Still, it’s in RDF, and I like compatibility, so it’s here.

Step 2: Allow Language Tags

Why: When you have text available in several different languages, language tags provide a way to select which of the available strings, if any, matches the language preference of the user.

Also: Text-to-speech systems can handle text better if they know which natural language to use in pronouncing the text.

How: RDF allows language tags on string literals. In JRON, we use a pair like this:

{ "__text": "chat",
  "__lang": "fr" }

Commentary: Personally, I’ve never liked this bit of RDF. I feel like there are better architectures for handling language tagging. But there was a vocal community that felt this was essential, so it’s in the standard. I gather some people like it, and I haven’t seen a good counter-proposal.

Step 3: Allow Non-Tree Structures

Why: Sometimes your data is not tree structured. Sometimes you have an arbitrary directed graph, such as when representing a social network.

How: In RDF, an arbitrary “node id” is available for making non-tree structures. We can do the same in JRON, saying any object may have a node id; if it does, the object is considered the same as all other objects with the same node id. Like this bit of JSON saying my friend Eric and I both know each other:

...
   { "foaf_name": "Sandro Hawke",
     "foaf_knows: { "__node_id": "n102" },
     "__node_id": "n334" }
...
   { "foaf_name": "Eric Prud'hommeaux",
     "foaf_knows: { "__node_id": "n334" },
     "__node_id": "n102" }
...

In the above example, the objects representing me and Eric are given node ids, and then those node ids are used to make the links to each other. We could also do this with only one node id, but we still need at least one:

   { "foaf_name": "Sandro Hawke",
     "foaf_knows: { "foaf_name": "Eric Prud'hommeaux",
                    "foaf_knows: { "__node_id": "n334" },
     "__node_id": "n334" }

Okay, those were the ordinary three things to add to JSON. Here are the interesting three:

Step 4: Allow Cross-Document Structures

Why: Sometimes, there is useful, relevant data available on the Web but it’s not part of the current JSON document. We would not want all the Web pages in the world to be gathered into one big Web page; similarly, it’s good to keep data in different documents. But that shouldn’t stop us from easily combining the data, and keeping the links intact.

How: RDF allows IRIs (unicode web addresses) to be used as node identifiers. They are like node ids, except they work across multiple documents; they are globally unambiguous identifiers, and systems can use Web protocols to dereference them to get other useful information.

In JSON, we can do this:

     { "foaf_name": "Sandro Hawke",
       "__iri": "http://www.w3.org/People/Sandro/data#Sandro_Hawke"
     }

Commentary: So why do we still need __node_id? Because sometimes it’s a pain to make up a good IRI. Some people prefer to always use IRIs, avoiding node_ids in their data, and that’s fine.

Step 5: Put Keys in Namespaces

Why: When data is coming from different sources across the Web, it’s not practical to get all the sources to agree on all the terminology. Instead, by using Web addresses (URLs/IRIs) as our keys, we allow individuals and organizations to make their own decisions. They can decide how much to share their vocabularies, and they avoid accidental name collisions. The web address also provides a handy link to documentation, community sites, schemas, etc.

How: It’s awkward to use whole, long http IRIs everywhere, so as in many RDF syntaxes, JRON has a prefix expansion mechanism, like this:

    { "foaf_name": "Sandro Hawke",
       ...
      "__prefixes": {
         "foaf_" : "http://xmlns.com/foaf/0.1/"
      }
    }

Here the key “foaf_name” gets expanded into “http://xmlns.com/foaf/0.1/name”, which serves as a unique-on-the-Internet identifier for a particular conceptualization of names.

Commentary: Although I’ve left it almost to the end, this is the one mandatory part of this proposal. All the other elements are only present when required by the data. The null JRON document is: {“__prefixes”:{}}

Others have suggested this part can be optional, too, by having a set of standard prefixes for a given API. I’m not entirely opposed to that, but I’m concerned about how those defaults would be communicated in practice.

Also, I’m not sure there’s consensus on what character to use in the short name: should it be foaf_name, foaf.name, foaf:name, or what? The mechanism here is that you can use whatever you want: the __prefixes table keys are matched longest-first. If there’s an entry with an empty string, that provides a default namespace.
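
To make the matching rule concrete, here is a tiny sketch of the expansion (the function name is mine):

    def expand_key(key, prefixes):
        """Expand a short key like "foaf_name" into a full IRI, matching
        prefix table entries longest-first; an empty-string entry acts
        as the default namespace."""
        if key.startswith("__"):
            return key                    # reserved JRON keys stay as-is
        for short in sorted(prefixes, key=len, reverse=True):
            if key.startswith(short):
                return prefixes[short] + key[len(short):]
        return key                        # nothing matched, no default given

    prefixes = {"foaf_": "http://xmlns.com/foaf/0.1/"}
    assert expand_key("foaf_name", prefixes) == "http://xmlns.com/foaf/0.1/name"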

Step 6: Allow Multiple Values Per Key

Why: Sometimes it makes sense to have more than one value for some property. For instance, as it turns out, I have more than one friend. I could use a single-value ‘list-of-friends’ property, but sometimes it makes more sense to use a ‘friend’ property that has multiple values. In particular, if we’ll be learning who my friends are from multiple sources, and we were using lists, what order would we put the resulting combined list in?

How: We still just use JSON lists, but we indicate that the order does not matter, so the values can be merged arbitrarily:

   { "foaf.name": "Sandro Hawke",
     "foaf.knows: { "__values": [
                     { "foaf.name": "Eric Prud'hommeaux" },
                     { "foaf.name": "Dan Brickley" },
                     { "foaf.name": "Matt Womer" }
                  ]}
   }
   

Closing Thoughts

That’s it. Those are the six things that RDF does that normal JSON doesn’t do. Did I miss something?

The API I’m imagining (but haven’t built yet) would have a few
features like:

jron_reprefix(tree, desired_prefixes)
    Returns another JRON tree with all the prefixes matching the ones provided here. If you’re going to use foaf, for instance, you probably want to set a prefix like “foaf.” for foaf, so your code can expect it.

jron_merge_nodes(tree) and jron_treeify(tree)
    Convert a tree (suitable for transmitting) from/to a graph (suitable for use in memory).

jron_use_native_type(tree)
    Would convert all the __type/__repr objects into suitable local objects, if they exist. Maybe even date/time objects, if there’s a suitable library installed for those.
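
As a sketch of the last of these, here is what jron_use_native_type might look like if it handled just one datatype; everything beyond the names already given above is my assumption:

    import datetime

    XS_DATE = "http://www.w3.org/2001/XMLSchema#date"

    def jron_use_native_type(tree):
        """Recursively replace __type/__repr pairs with native values;
        a real version would dispatch on a table of datatypes."""
        if isinstance(tree, list):
            return [jron_use_native_type(item) for item in tree]
        if isinstance(tree, dict):
            if tree.get("__type") == XS_DATE:
                return datetime.date.fromisoformat(tree["__repr"])
            return {k: jron_use_native_type(v) for k, v in tree.items()}
        return tree

    jron_use_native_type({"__repr": "2010-03-06", "__type": XS_DATE})
    # -> datetime.date(2010, 3, 6)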

One technical issue for RDF folks:

Should JSON arrays be considered RDF Lists or RDF Sequences? Perhaps they default to RDF Lists but there can be an option flag in the top-level object:

 { ...
   "__json_array_is_rdf_seq": true 
   ...
 }

When that flag is absent or false, arrays would be considered RDF Lists. My sense is no one needs to use both. Maybe soon we’ll know if RDF Sequences can finally be deprecated.

I’m just past halfway through nine consecutive days of all-day meetings, clustered around the W3C’s semi-annual Advisory Committee meeting. When this series started, I didn’t have any particular responsibilities beyond attending. But then John Sheridan had to cancel his trip, and I was tapped to stand in for him on panels at both the AC meeting (this morning) and FOSE tomorrow.

My slides don’t stand on their own very well, but here they are anyway. There might be video of the FOSE talk, later. And if you’re in Washington, by all means, drop by!

1. Why linked data is important for government open data initiatives. This was for a panel showing how Linked Data is being adopted, in various ways, in various industries, for an audience (W3C staff and member representatives) who were, perhaps, tired of hearing about the promise of the Semantic Web.

2. How To Do Open Data (in 10 minutes). This is for an audience of federal IT managers/vendors who are mostly unfamiliar with the concept of RDF.

RDF meets NoSQL

March 9, 2010

On Thursday, I have 20 minutes to address 200 people (plus a video audience) at NoSQL Live … from Boston. My self-appointed mission is to start building bridges between the NoSQL community and the Linked Data/RDF/W3C community. These are two sets of people working on different problems, but it’s pretty clear to me they are heading in the same direction, in similar spirit, and could gain a lot from working together.

I’m organizing my talk around the question of standardization for NoSQL, and I’ll talk about W3C process and such, but the interesting part is where NoSQL touches RDF. So here are some of my thoughts on that, written for an audience already familiar with RDF. I’d love to get some feedback on the basic ideas now, to make my talk better.

  1. While they both want to move beyond SQL, their reasons are different. I understand the key motivation for NoSQL is:

    • SQL doesn’t scale big enough. Once your read-write dataset gets too big for a single machine, you have to develop and maintain a messy sharding system. (But see below.) Sometimes it’s an economy/efficiency thing; if you don’t need ACID, a NoSQL solution might give you better performance on cheaper hardware.

    Meanwhile, I see two big motivations for the RDF folks:

    • Decentralization. On the open Web, data comes from many different sources, mostly beyond your control. Everyone might be providing data to everyone, and no one has the authority to run a central database, even if they had the technology and the iron.

    • Inference. Some people find formal semantics and well-defined inference to be very useful. (How is this different from triggers and views? Good question.)

  2. There’s a freedom, a joy, and in some cases enormous practicality to not having to pre-create your database schema. Nearly all NoSQL and RDF systems are schemaless or allow a fully dynamic schema. (But I understand most bigtable systems have fixed column families. That could be a problem in using a bigtable as an RDF store.)

  3. Both tend to follow Web Architecture, using HTTP and often using REST.

  4. Reading about CouchDB and MongoDB (JSON document databases), as well as Neo4j (a graph database), I noticed an undercurrent about SQL being awkward to program against. I guess this is the O/R Impedance Mismatch, especially the structural differences. RDF’s design is very close to the relational model, so it doesn’t help on this front. Within the RDF community, however, there are some systems which attempt to partly bridge this gap, including node-centric APIs (which I happen to prefer, myself). I would also argue that duck typing closes the gap from the programming-language side.

  5. I don’t see NoSQL going anywhere near linked data or having a vocabulary ecosystem. I expect it will want to, someday. Decentralization is a key difference in requirements.

  6. I only see RDF dealing with inference. Why is this? My first thought is that the RDF community has a lot of AI roots, and the NoSQL community doesn’t. But maybe it’s about economics and motivations: formalizing the notion of inference makes it possible, in theory, to easily deploy very sophisticated data transformations (and have them complete before the sun goes out). At NoSQL scale, folks are much more concerned about techniques for being able to run even simple transformations (and have them complete before the power bill comes due). I note that AllegroGraph manages to be in both communities, with a very practical, high-performance Prolog element; I don’t know how parallel it is. A few RDF folks are working on using map/reduce. Presumably, with the rise of multi-core systems, even single-user inference engines will want to be made parallel.

  7. How many of the reasons for NoSQL rejecting SQL also apply to SPARQL? Does the scaling issue apply? Actually, does the scaling issue really apply to SQL? Michael Stonebraker (more or less the Voice of God) claims automatic sharding can and should be done while still using SQL. Some people reply: Perhaps, but in Enterprise-Grade Open Source? Also, maybe “SQL” is a euphemism for ACID, and that’s really what doesn’t scale and/or is too expensive. Perhaps that issue needs to be settled before considering SPARQL? Actually, the state of ACID in SPARQL is an open issue right now; maybe NoSQL can inform that decision.

  8. RDF is standardized. Some would argue it’s more standardized than SQL; that case will be stronger when SPARQL 1.1 is done. Here’s a diagram Steve Harris made, which I reformatted. “KV” refers to key-value stores, the simplest, most scalable kind of NoSQL database.

    [Diagram (by Steve Harris): SPARQL is more standardized than SQL, which is more standardized than key-value stores; on scaling, the order reverses, with KV stores and SQL scaling further than SPARQL.]

So… What does that all boil down to?

Bottom line: RDF could learn a lot from NoSQL about scaling and ease-of-programming; NoSQL could learn a lot from RDF about decentralization and inference.

Some closing questions, ideas….

Can someone make a SPARQL endpoint with Cassandra’s performance and scaling properties? I haven’t studied this idea much, but I’m afraid the static column families will make it impossible to get much performance without building the store for a particular set of SPARQL queries. But it could still be useful, even with that drawback. Or maybe some SPARQL endpoint is already there; has anyone really tried a comparative benchmark?

How does the SPARQL endpoint description and aggregation work compare to database sharding? Are there designs for doing it automatically?