Sir Tim Berners-Lee invented the World Wide Web, along with several of the technologies that make it work. That was years ago, and yet a surprising amount of the technology the web runs on is still in wide use today. See that "http://" up in your address bar? That's Tim's doing. In fact, I recently read an interview in which he said that if he were to do it all over again, he'd remove the "//", because it serves no practical purpose.
So he gave a talk at Princeton University, which I attended. He went far deeper into his 'semantic web' ideas than I thought he would, and I was glad to get a real explanation of it from the source. I'd heard about the semantic web before, but never really got it. Now I get it, and I understand a little better why it's not really taking over how we think about data on the web.
The general idea seems to be that we could have a sort of common, globally linked, machine-accessible, distributed database. Well, that's one way to think about it, anyway. One problem this solves is that it gets us developers out of screen scraping, out of dealing with data in the context of "content", and into dealing with it as plain data that we can do whatever we want with. The data would be described using agreed-upon, standardized rules, so you also get around the problems that arise from trying to develop against someone's (perhaps arcane) notion of what it means to describe data using XML. Theoretically, this does open doors if you think about it long enough.
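To make "data described by agreed-upon rules" concrete, here's a minimal sketch of reading a FOAF-style RDF/XML record with nothing but Python's standard library. The document and the person in it are invented for illustration; only the RDF and FOAF namespace URIs are the real, standardized part.

```python
import xml.etree.ElementTree as ET

# The namespace URIs are the standardized "rules" for this data.
RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
FOAF = "http://xmlns.com/foaf/0.1/"

# A tiny, hypothetical FOAF document describing a person.
foaf_doc = """<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:foaf="http://xmlns.com/foaf/0.1/">
  <foaf:Person>
    <foaf:name>Jane Example</foaf:name>
    <foaf:mbox rdf:resource="mailto:jane@example.org"/>
    <foaf:homepage rdf:resource="http://example.org/~jane/"/>
  </foaf:Person>
</rdf:RDF>"""

root = ET.fromstring(foaf_doc)
person = root.find(f"{{{FOAF}}}Person")
name = person.find(f"{{{FOAF}}}name").text
mbox = person.find(f"{{{FOAF}}}mbox").get(f"{{{RDF}}}resource")
# No screen scraping needed: the markup tells us what each value *is*.
print(name, mbox)
```

The point isn't the parsing code, which is trivial; it's that any other tool in the world that knows the FOAF vocabulary can consume this same file without negotiating a format with me first.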
So in the end, I left the talk not overly impressed with the semantic web, for a couple of reasons. First, I'm still not certain how this global data is linked. Further, while any particular piece of data may be well structured and well defined, the architecture of this global database seems a bit dodgy. OK, so I have a friend-of-a-friend (FOAF) file on someone, and that could link to another website, an email address, a map showing where they live, their flickr page, and all that. But now I need some kind of tool to help me search for all of the people who live in Wyoming, who have links on their del.icio.us page that overlap with some of mine, and who have a GMail account so I can chat with them. The tools, as far as I can tell, aren't there.
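To show what that missing tool would actually have to do, here's a rough sketch of the Wyoming query run by hand over already-merged records. Every record and field name here is hypothetical, standing in for data that would really come from FOAF files, del.icio.us exports, and contact listings; gathering and merging that data is exactly the part no tool does for you today.

```python
# Hypothetical, pre-merged records; in reality each field would come
# from a different site, in a different format, via a different API.
people = [
    {"name": "Alice", "state": "WY", "bookmarks": {"python.org", "w3.org"},
     "gmail": "alice@gmail.com"},
    {"name": "Bob", "state": "NJ", "bookmarks": {"w3.org"},
     "gmail": "bob@gmail.com"},
    {"name": "Carol", "state": "WY", "bookmarks": {"slashdot.org"},
     "gmail": None},
]

my_bookmarks = {"python.org", "perl.org"}

# "People in Wyoming whose bookmarks overlap mine, and who have GMail":
matches = [p["name"] for p in people
           if p["state"] == "WY"
           and p["bookmarks"] & my_bookmarks  # any shared bookmark
           and p["gmail"]]
print(matches)  # ['Alice']
```

The filtering is three lines; it's the `people` list that the semantic web is supposed to hand us, and doesn't yet.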
"Well, there needs to be a killer app, and that's not here yet, because people don't know about it yet." Well, normally that would probably be valid. Not everyone has a del.icio.us account, a flickr account, a gmail account, or a homepage for that matter, because some people just haven't learned about those things yet. But the big difference here is that, while GMail is something like 2 years old, and flickr is around that age as well, semantic web is something like 10 years old, and they're only just now working on a mechanism for querying all of this wonderful data!
Well, this explains a lot. How, exactly, do you develop tools to work with data when there isn't a well-defined way to query it? And even when that *does* exist, what will I be interfacing with to provide responses to those queries? Who owns that interface?
Another concern is that all of these tools are, as I see it, destined to be focused on a specific task that deals with some subset of the data. Take the FOAF explorer tool, which collects links to FOAF files, i.e., data about people. You can probably link off to location data about that person, which is nice, but now I'm staring at maps.google.com, another task-specific tool. So the tools each deal with some subset of the data, and for every tool that does, the developer has to learn the rules for that subset. Gee, this is starting to sound a lot like ASN.1, which is the basis for SNMP and LDAP, which are worthy data sources in themselves!
In addition, the assumption that entities want to share data globally doesn't necessarily hold. In many instances, data is controlled by one entity and subscribed to by another, which can then write applications against it. The data is described using XML, the supplier provides a DTD, you do a little homework to get to know the data, you write what amounts to a web services client, and you're all set.
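That subscriber workflow, in miniature, might look something like this. The supplier response and its element names are invented; in practice you'd fetch the document over HTTP and validate it against the supplier's DTD before trusting it.

```python
import xml.etree.ElementTree as ET

# Stand-in for a response from a supplier's web service.
# The <catalog>/<item> structure here is hypothetical.
response = """<?xml version="1.0"?>
<catalog>
  <item sku="A-100"><name>Widget</name><price>9.95</price></item>
  <item sku="A-200"><name>Gadget</name><price>19.95</price></item>
</catalog>"""

catalog = ET.fromstring(response)
# Build a sku -> price lookup from the known document structure.
items = {i.get("sku"): float(i.findtext("price"))
         for i in catalog.findall("item")}
print(items)  # {'A-100': 9.95, 'A-200': 19.95}
```

Note that this works precisely because two parties agreed on one schema in advance; nothing global or semantic is required.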
I'm more than certain that Sir Tim would have perfectly satisfactory answers to all of this, and if anyone else can help me 'get it', I invite your comments.