August 23, 2009
Here are some things I want to do:
- Put my pictures of my latest family vacation on the web, where my friends and family can see them. Let my mother put her pictures of the same events there, too. Let me select some for my friends to see, and let her select some for her friends to see. Know that the pictures will be there, safe, for as long as anyone in the family wants them to be.
- Put my movie collection on the web, so I can watch my movies wherever I happen to be, and so can my kids. If one of them is at a friend’s house, why should it matter if she remembered to bring the DVD, if the house has a good network connection?
- Borrow my friends’ movies and music from time to time. If I want to watch some old college favorite (let’s say Heathers), and I know my friend Keith has a copy, must I drive 20 minutes to his house to borrow his DVD? No, I want him to have it on his server, and let me borrow it. (Note that I’m not asking to make my own copy. What I want might or might not break some laws, but it seems to me that if I can legally borrow his physical DVD for a few days, I should also be able to legally borrow the bits on his DVD. As long as there’s still only one working copy at any point in time, I expect the MPAA won’t be quite so furious with me.)
- I want this copy-protection to be only advisory; if Greg borrows my copy, and somehow loses it or it breaks, I can just reclaim it. If he finds it again, his copy is an illegal one. There’s no need for security here (someone would just crack it anyway); I just want to make it easy for people who want to follow the rules to do so, and I hope the system would usually make it easier to follow the rules than to break them.
- I want this to be private. I’m talking about sharing with my friends here, not sharing with the world. I may not want the world looking through the family vacation pictures, and I don’t want strangers borrowing my movies if it’s going to prevent me from watching them. I certainly don’t want strangers copying movies I paid for if it’s going to make the MPAA apoplectic (as I’m sure it would).
I know: youtube, myspace, flickr and their less-popular but more-exciting rivals offer something like this. But I want it decentralized. When I say private, I mean private from everyone, including Google and News Corp and Yahoo/Microsoft. And when I say I want the pictures to be there, “safe”, more-or-less forever, I don’t just mean until some corporation decides to change its terms of service, or capriciously enforce some clause I never noticed.
I think we can do this.
The heart of any system like this is the servers, of course. I suggest that a few people in each cluster of friends would be willing to pay a few dollars a month for some managed web space. Imagine it as a locker, and they tell their friends the combination. Or a private island, and they give their friends the right to land, in their private jets. (Virtual private jets are free in this modern world.) It’s a web location where they and their friends can store their digital media, where it’s easy to access. (When users access these storage sites, I picture them generally not caring about location; I look for “Heathers” and it finds all the copies all my friends are willing to share — no need to specifically ask Keith. With this, redundancy should be convenient and encouraged.)
I picture the server software being pretty simple (web+db), and something any web hosting provider could offer with one-click installation. Like wordpress, etc, but even simpler. The point is that if you want your own locker/island, you could rent one for $5/mo (low end) to $50/mo (high end), from commodity providers.
The data would be stored encrypted, so the host computers and hosting services never get to see what bits are really stored. They can’t see the family pictures or hollywood movies, even if they want to, no matter their terms of service or pressure from the government.
(These secure network volumes already exist, in various forms, most famously Amazon S3. There’s no magic here, but we might want a few tweaks to the API, and there’s definitely a need to package it differently so that my mother can comfortably sign up for hers. Also, I think this would support a market for cheaper and less-reliable storage, since reliability could be provided by other means.)
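To make the host-can't-see-the-bits idea concrete, here is a minimal sketch of the client side in Python. It assumes a dumb key/value blob store; the store_blob and fetch_blob names are hypothetical stand-ins, not any real provider's API. The only point is that encryption and decryption happen on the client, so the hosting service only ever handles ciphertext.

```python
# Sketch only: client-side encryption so the hosting provider never sees plaintext.
# store_blob/fetch_blob are made-up stand-ins for an S3-like key/value store.
from cryptography.fernet import Fernet

blob_store = {}  # stand-in for remote storage: it only ever holds ciphertext


def store_blob(name: str, ciphertext: bytes) -> None:
    blob_store[name] = ciphertext


def fetch_blob(name: str) -> bytes:
    return blob_store[name]


# The key stays with me and the friends I share it with, never with the host.
key = Fernet.generate_key()
locker = Fernet(key)


def put_media(name: str, data: bytes) -> None:
    store_blob(name, locker.encrypt(data))


def get_media(name: str) -> bytes:
    return locker.decrypt(fetch_blob(name))


put_media("vacation/beach-001.jpg", b"...jpeg bytes...")
assert get_media("vacation/beach-001.jpg") == b"...jpeg bytes..."
```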
The clients would be more than just browsers. They have to understand the media tagging and indexing, and handle the crypto. They’d probably offer discussion boards, and such, as part of the media metadata.
I’m imagining three different kinds of clients:
- A purely web-based client would allow users an experience very much like flickr and youtube. It could show ads, and it would know your encryption keys, but at least it wouldn’t be hosting the media data, so you could move from one to another without risk of losing your photos, music, and movies.
- A plug-in for a media player like VLC.
- A command-line, virtual-file-system client. This is the kind of thing I might hack together as a prototype, using fuse. All the lockers to which I have keys would appear as a big filesystem, and there would be some tools for searching and manipulating it. I could use my existing media players, unmodified.
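Here is a rough sketch of what that third client might look like, using fusepy. The in-memory lockers dict is a made-up stand-in for the real fetch-and-decrypt step, and the names are mine, not part of any existing system; the point is just that each locker I hold a key for shows up as a directory.

```python
# Sketch only: a read-only FUSE view of the lockers I hold keys for (fusepy).
import errno
import stat
import sys

from fuse import FUSE, FuseOSError, Operations

lockers = {  # hypothetical decrypted view: locker name -> {file name: bytes}
    "keith": {"heathers.avi": b"...movie bytes..."},
    "mom": {"beach-001.jpg": b"...jpeg bytes..."},
}


class LockerFS(Operations):
    def getattr(self, path, fh=None):
        parts = [p for p in path.split("/") if p]
        if len(parts) <= 1:  # "/" or "/<locker>"
            if len(parts) == 1 and parts[0] not in lockers:
                raise FuseOSError(errno.ENOENT)
            return dict(st_mode=stat.S_IFDIR | 0o755, st_nlink=2)
        if len(parts) != 2:
            raise FuseOSError(errno.ENOENT)
        locker, name = parts
        data = lockers.get(locker, {}).get(name)
        if data is None:
            raise FuseOSError(errno.ENOENT)
        return dict(st_mode=stat.S_IFREG | 0o444, st_nlink=1, st_size=len(data))

    def readdir(self, path, fh):
        parts = [p for p in path.split("/") if p]
        names = lockers if not parts else lockers.get(parts[0], {})
        return [".", ".."] + list(names)

    def read(self, path, size, offset, fh):
        locker, name = [p for p in path.split("/") if p]
        return lockers[locker][name][offset:offset + size]


if __name__ == "__main__":
    FUSE(LockerFS(), sys.argv[1], foreground=True, nothreads=True)
```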
That’s the gist of it. I have lots more details floating around in my head. Does it exist already? Shall I go ahead and make a prototype (in my copious free after-hours time)?
July 9, 2009
RIF is done, more or less.
When I say “done”, I don’t mean “done” like toast in the toaster is done, when it’s just perfectly crunchy, without quite being dry or hard. And I don’t mean “done” like a hacking project which is done at that precise moment when it stops being more fascinating than sleep or food or sunshine. No, RIF is “done” like a term paper, the night before it’s due. It meets the requirements, more or less, and the time has come to ship it.
Of course, the W3C process favors quality over speed, so instead of turning it in and walking away, we’ll have to do several rewrites, to address the teacher’s comments. In this case, we have at least three rounds of that. In the first round (called “last call”, which started last Friday), the “teacher” is anyone who feels like reading and commenting. Then comes “candidate recommendation”, when we try to get everyone to implement it and give us comments as they do. (This is where OWL 2 is now). Finally, we’ll ask for a high level review from all W3C member organizations, as they decide whether to promote it from Proposed Recommendation to a full W3C Recommendation.
But still, it’s done like that term paper. It’s turned in, and now we wait for the review comments.
So what is RIF good for, anyway?
The consensus Working Group answer is 26 pages long and rather in need of some polish, so here’s my short answer. Here’s why I’ve spent the last five years working on it. (No need to cry for my lost youth, I did some other fun things during that time, too.)
We need RIF so that we don’t need standards any more.
If you’ve ever tried to use FOAF (arguably the most popular Semantic Web vocabulary), you may have noticed a little problem with representing names. Am I:
- [ foaf:firstName "Sandro"; foaf:surname "Hawke"], or
- [ foaf:givenname "Sandro"; foaf:family_name "Hawke" ], or just plain
- [ foaf:name "Sandro Hawke" ] ?
Who knows? How can anyone decide? It’s a mess.
And, of course, this problem is repeated everywhere. Every ontology has its share of coin-flip design decisions — decisions where you have no overwhelming engineering reason to make one choice over another. And every problem space has, or will soon have, a vast array of ontologies addressing it from many slightly-different angles.
I want data providers to publish using whatever ontology they know and love.
I want data consumers to consume (use) data in whatever ontology they know and love.
I expect RIF to be the glue in the middle, behind the scenes, in a fuzzy ball of linked-data rule-engine goodness.
Imagine Jos publishes using foaf:firstName and foaf:surname. Imagine Chris publishes using foaf:givenname and foaf:family_name. Imagine Gary writes an app which looks for foaf:name data. As long as the right RIF rules are present on the Web, in the right places, this should work. People using Gary’s app should see the data from Jos and the data from Chris, even though Gary never knew or cared about the vocabularies they used.
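The real glue would be written in RIF, but as a toy illustration of what one such rule does, here is a hand-rolled version in Python with rdflib (made-up data, and the rule applied by a plain loop rather than a rule engine): derive foaf:name wherever foaf:firstName and foaf:surname are both present.

```python
# Toy illustration (not RIF syntax) of a glue rule:
#   ?x foaf:firstName ?f  AND  ?x foaf:surname ?s   =>   ?x foaf:name "?f ?s"
from rdflib import Graph, Literal, Namespace

FOAF = Namespace("http://xmlns.com/foaf/0.1/")

g = Graph()
g.parse(data="""
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
[] foaf:firstName "Sandro" ; foaf:surname "Hawke" .
""", format="turtle")

# Apply the rule: for every subject with both parts, add the combined name.
for person, _, first in list(g.triples((None, FOAF.firstName, None))):
    for _, _, last in g.triples((person, FOAF.surname, None)):
        g.add((person, FOAF.name, Literal(f"{first} {last}")))

for _, _, name in g.triples((None, FOAF.name, None)):
    print(name)  # "Sandro Hawke"
```

With rules like this sitting between publisher and consumer, Gary's foaf:name query sees the derived triple, even though Jos never wrote one.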
Of course, there’s some question as to what those rules should say. In the US, the givenname and the firstName can be treated as the same. Meanwhile, in Japan, the family_name is the firstName! And if you try to split a name back into firstName and surname, do you use the last space (as in “Sarah Jessica Parker”) or the first space (as in “Hillary Rodham Clinton”)?
I think the solution is to accept that there may be multiple rulesets, suitable for different purposes, and they may not be perfect. I explored this space to some degree in a different context: XTAN associates some “impact” (which might be called “semantic damage”) with each transformation (or ruleset). I think that can work.
So, there are still some details to work out. I’m not presenting the solution here; I’m just explaining why RIF interests me. Now you know.
(And no, I don’t really think this will entirely obviate the need for standards, but I think it will significantly reduce that need, taking some pressure off, and shifting some more work to the machines.)
June 19, 2009
I’m at SFO with three hours to kill, and not many brain cells left, after spending the week at the Semantic Technologies Conference. I am, it turns out, so short on functioning brain cells that I’m writing a blog post, after all these months of self-censoring because I didn’t have anything worth saying here.
The dominant question at SemTech was whether we’ve finally reached the point in the adoption curve where (to mix 2d curve metaphors) it’s all downhill from here. Has this thing really caught on? Can we just coast and let the Semantic Web take over the world now?
I still think there are some vital pieces of the architecture missing, but I can’t deny that more and more people seem to “get it”, and be spreading the word. People I’ve never even heard of. That’s a pretty good indicator.
Of course, maybe they don’t really get it. I’m not quite sure anyone really gets it, since they don’t seem to notice those missing pieces.
I was happy to see Evren Sirin talking about their work at Clark and Parsia on using OWL for integrity constraints. I’m not sure they’ve got it quite right — it’s hard to tell — but I’m really glad they are trying.
The divisions within the community are still great. You have the rules folks, who really don’t get this whole Description Logic thing. You have the Description Logic folks who usually try to not be too condescending to the rules folks. We have the natural language folks, who I still mostly ignore.
RIF is pretty badly misunderstood and mis-characterized. That might not be my fault, but it’s kind of my responsibility. As one step, I put a little more time into starting a RIF FAQ this afternoon. Feel free to send me more questions that I don’t know how to answer.
Looking over my notes…
Peter Rip had some words of wisdom for startup founders that I think may apply much more broadly. He said to predict your own behavior, then live up to it. That’s what gives people confidence in you. So step one is know thyself, eh? I’m working on it, I’m working on it. One of the things I think I’ve learned about myself is that I don’t want to found a startup, despite all the fantasies about it, ever since ninth grade. What’s changed, I think, is that I’m being more realistic about what it would do to my life, what it would cost. (I suppose at some level, I’ve always known that, since I never actually did start a company with employees.)
Allegro Graph 4.0 filled me with triplestore lust. I don’t know if it could live up to the impression Jans painted of it, but … I want one. Interestingly, I didn’t feel envy; perhaps I’m also ready to give up on the fantasies of writing a killer triplestore.
In the Semantic Search session, David Booth and someone I don’t know separately expressed the concern that computers are getting too smart for our own good. Not really in the Skynet sense, but in the sense that when Google tweaks their algorithms, in ways even they don’t understand, or the web just shifts a little, suddenly you can’t find some page any more. I thought Peter Norvig looked apologetic when he responded with a terse “use plus as a workaround!”. I need to remember that more, when I get frustrated with Google not really doing keyword searching. Conclusion: make sure the compu-smarts always have an off-switch, and that humans never forget where the off-switch is.
I’m a big fan of Tom Gruber, but I’d short sell Siri if it were publicly traded. I’m not sure I can put my finger on it. All the arguments he makes for agent computing are compelling, but my gut says thumbs down. Could it be because there is (as far as I know) not a drop of SemWeb technology in there? I don’t think that’s it. I think it’s that I can’t imagine it will actually work better than more conventional alternatives. People don’t want an assistant; they want tools that are simple yet powerful enough for them to complete the task themselves. It’s a bit like GUI vs command-line. But who knows… (I wonder how this bunch of folks from SRI decided to name their new company and product SIRI. Odd….)
Data Portability. Wow, is this a difficult space. I am not optimistic here, either. I think we have many years of stumbling around in our future on this one. And that’s even if facebook isn’t being evil. On the plus side, the community is starting to realize what a hard problem it is. I’ve heard that admitting you have a problem is a good place to start. Still, the meme that there exists a quick solution, if we just get a few smart people together, … it’s damn compelling.
And here we are, going backward in time, back to the first session I attended, Monday morning, from some freebase folks (Jamie Taylor and Colin Evans). I should play with freebase more. If I ever get back to playing with scripts to manage my movie collection, I should probably use their movie data instead of IMDB. Jamie kept saying great things about W3C; I wanted to ask him why they don’t just join, but though I saw him many more times at the conference, he was always walking by in a hurry.
It’s not like I could have made much of a case, anyway. One of the problems with the W3C’s current business model is that folks no longer join W3C just because they (a) use our stuff, and (b) think we’re cool. They actually want business value in return for their dues! Losers.
(How do you measure the business value of a standard existing, anyway? In most cases, it’s a commons problem, where lots of folks get enormous value from the standard, but we don’t know how to monetize that. We can only charge for being part of the conversation when the standard is drafted, and sometimes that doesn’t seem to be worth very much.)
Speaking of W3C, OWL 2 went to CR last week. I think that was my most challenging round of publications yet. Do I say that every time? Still, stepping in as editor of rdf:PlainLiteral, and doing the whole transition process, in the midst of getting a new manager, … it was challenging.
I’m not actually worried about the CR phase itself. OWL 2 looks pretty darn good. Until SemTech, I was a bit worried about us getting RL implementations, but several people mentioned they planned to try it, and after a while I realized it’s kind of a no-brainer. If you play in the SemWeb space and have a rule engine (as many folks do), why wouldn’t you give OWL RL a try? Of course, you might not get around to running all the test cases, and reporting back by July 30, but at this point I’m no longer very worried about it.
Okay. One hour to flight time. That’s better. Have fun on the ground, everyone.
(topics I’m leaving out: Ian Horrocks taught a kick-ass class in description logics. The Jena team are pretty optimistic about their post-HP future (as am I). I’m sad that Jamie Taylor didn’t understand the need for 303 redirects.)
October 28, 2008
Here’s another middle-of-the-night-while-travelling post. It’s about OWL….
Last week, the W3C OWL Working Group decided that it was essentially done with the design of the OWL 2 language. All that remains is some editorial work and fixing bugs that might crop up. In W3C process terminology, the Working Group decided it was (mostly) ready for “Last Call” of its multi-part specification. Happily, it’s pretty much on schedule.
I’ve been the primary W3C staff contact for this Working Group, and I was the secondary staff contact for the OWL 1 Working Group for its last six months (back in 2003-2004). I’ve tried to support both groups however I could, in infrastructure, process advice, management advice, and in some limited areas with technical design advice. But, awkwardly, I’m not really an OWL user, so I sometimes feel like this OWL work is only my “day job”.
That’s okay — lots of us do things for work that we’re not passionate about — but it isn’t how I usually operate. More interestingly, it’s kind of odd. I mean, OWL is on the very short list of Semantic Web standards, and I’m quite passionate about decentralization, which is more-or-less what the Semantic Web is about. So why am I not passionate about OWL?
Okay, there may be some personal reasons. I’ll describe them in the hope of factoring them out, and in the hope of generally learning more about how people interact with standards committees.
Although I was involved fairly near the beginning of work on OWL, I still feel late-to-the-party. For me, when I’m late to a party, I tend to have a nagging feeling that I missed something important, and I stay very cautious. I’m less likely to become enthusiastic.
Perhaps because of that, or perhaps because I’m not a Description Logics researcher, I don’t feel like part of the “in-crowd” — even though I know and like and feel personally comfortable with many people who are.
I don’t think I could ever get into the top tier of experts in either OWL implementation or OWL usage. There are some very, very smart people who have been dedicated to those subjects for years already. I doubt I could catch up.
These wouldn’t stop me from being an implementor (I did one strange OWL implementation called surnia) and/or user, but they make it harder for me to feel ownership. Maybe that’s related to excitement.
The important issue here, though, is technical. Is OWL important for decentralization? Perhaps it is, but I’m not convinced. Really, my gut says “no”, the experts say “yes”, and I’m just confused. And so we end up with a blog post.
What purposes might OWL serve?
1. It might be a data interface definition language, along the lines of BNF, ASN.1, and XML Schema, except on some subtly different level. Decentralized systems certainly need something like this — some computer format for defining the interfaces — but is it OWL? (cf my work last year on asn07 (rif wiki, rif e-mail)). Some people tell me it’s entirely unsuitable for this; some people say they’ve been using it for this for years.
2. It might give people some computer support for defining a vocabulary, much as a spell-checker helps people writing prose. I have the feeling this is where the core Ontology community is. In this view, OWL is there to help you find errors and get insights into your vocabulary specification (aka your ontology).
3. It might be used as a declarative programming language for expressing Semantic Web shims, the transformations needed so systems using related-but-different RDF vocabularies can interoperate. But maybe this is better done with RIF or conventional programming languages? This is the Ontology Mapping problem. Clearly OWL isn’t expressive enough to do all kinds of mapping, and it probably is the best option for trivial kinds of mapping (stuff that just uses owl:sameAs), but what about everything in between?
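For the trivial end of that spectrum, the mapping really is just one assertion plus a reasoner. Here is a rough rdflib sketch that hand-applies the owl:equivalentProperty semantics in a loop, with made-up data; a real OWL reasoner would derive the same triples.

```python
# Rough sketch: applying an owl:equivalentProperty mapping by hand.
from rdflib import Graph, Namespace
from rdflib.namespace import OWL

FOAF = Namespace("http://xmlns.com/foaf/0.1/")

g = Graph()
g.parse(data="""
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
foaf:givenname owl:equivalentProperty foaf:firstName .
[] foaf:givenname "Chris" .
""", format="turtle")

# For each pair of equivalent properties, copy triples in both directions.
for p1, _, p2 in list(g.triples((None, OWL.equivalentProperty, None))):
    for s, _, o in list(g.triples((None, p1, None))):
        g.add((s, p2, o))
    for s, _, o in list(g.triples((None, p2, None))):
        g.add((s, p1, o))

print(list(g.objects(None, FOAF.firstName)))  # [rdflib.term.Literal('Chris')]
```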
Sometimes in standards work there is constructive ambiguity in the spec and also in the charter. It makes the technology easier to sell, and draws in a wide base of potential support. What is this technology for? Well, it’s for lots of things. It might be just what you need! If the charter were more specific, picking out only points 1, 2, or 3, then we’d have a narrower spec, serving a narrower community. Maybe serving it better, but still with fewer economies of scale and network effects.
If we look back at the OWL Use Cases and Requirements, we see use cases somewhat in line with these three options, although taking a rather different approach. The first two are about using automated reasoning in mapping between vocabularies; that’s sort of number 3, but with human vocabularies à la wordnet. The other four are less clear cut. Number five looks like classic decentralization, but I don’t see any case being made for OWL in there. Number six looks like classic planning; is OWL the best logic language for automated planning?
Anyway, I’d love to hear comments (below or via trackback/pingback) about what you think OWL is good for.
Please note that I’m really not complaining about OWL. I’m fairly comfortable with its design, and quite happy with the design process. I’m just saying that I have trouble seeing why it’s as cool as some people seem to think it is. Maybe I’m missing something. Maybe my goals are different. Maybe its utility is exactly the same to both of us, and for whatever reason it doesn’t turn me on the same way. I’m just trying to understand that….