May 7, 2007
More academic hype about the Semantic Web
A major Semantic Web researcher has built a cluster that can answer RDF queries with subsecond response time against a database of 7 billion three-column records, The Register obsequiously reports. Golly gee whiz wow.
“The importance of this breakthrough cannot be overestimated,” said Professor Stefan Decker, director of DERI.
I actually think the Semantic Web contains some good ideas, but this kind of over-the-top breathlessness doesn’t seem to do anybody very much good.
Categories: RDF and graphs
Comments
3 Responses to “More academic hype about the Semantic Web”
The Semantic Web is certainly a great vision, but it lacks immediate benefits for its followers. There has been adoption by both industry and academia, but when it will actually be realized remains unanswered.
A comprehensive take on Semantic web and Web 3.0 at http://techbiz.blog.com/1730241/
Hi,
thanks for your comments. In fact, 7 billion was not the upper limit, but just where we stopped doing experiments.
All of the open source software prototypes for RDF databases run on one machine and have scalability problems.
Our solution allows going beyond the current implementations. Whether the result is trivial from a database researcher's point of view is something I can't judge.
If it is trivial, I wonder why we needed to create a solution at all. We created it because nothing was available, yet we saw high demand, both from our own research and from others.
I am absolutely open to discussion. If somebody can point me to a better, more scalable solution, I am happy to learn.
In fact, the Semantic Web community would be happy to have more people with a strong database background involved.
Best regards,
Stefan Decker
Hi Stefan!
As you surely know, Oracle — admittedly hardly open source — is heavily promoting an RDF offering. But it's really little more than a three-column table, suited to being (auto)joined to itself. When space permits, they precompute this join via their materialized view capability, although depending on the shape of the network that can of course lead to combinatorial explosion if one isn't careful. The same thing could of course be done in PostgreSQL, to name a fairly popular and robust open source relational DBMS alternative, or MySQL, to name a yet more popular one. Or on any of the specialty data warehouse DBMS/appliance products I write about here.
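To make the three-column point concrete, here is a minimal sketch of that kind of triple table and self-join, using SQLite from Python. The table name, sample data, and query are purely illustrative, not Oracle's (or anyone's) actual schema:

    # Illustrative sketch only: RDF triples as a plain three-column table,
    # with a graph query expressed as a self-join. Not any vendor's real schema.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()

    # An RDF dataset boils down to rows of (subject, predicate, object).
    cur.execute("CREATE TABLE triples (subject TEXT, predicate TEXT, object TEXT)")
    cur.executemany(
        "INSERT INTO triples VALUES (?, ?, ?)",
        [("alice", "knows", "bob"),
         ("bob", "knows", "carol"),
         ("carol", "worksFor", "acme")],
    )

    # A two-hop graph query is just the triple table joined to itself:
    # "what do the people Alice knows say about themselves?"
    cur.execute("""
        SELECT t2.subject, t2.predicate, t2.object
        FROM triples t1
        JOIN triples t2 ON t1.object = t2.subject
        WHERE t1.subject = 'alice' AND t1.predicate = 'knows'
    """)
    print(cur.fetchall())  # -> [('bob', 'knows', 'carol')]

Each additional hop in the graph is one more self-join, which is exactly where precomputation (materialized views) helps and where the combinatorial explosion lurks.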
I don’t see why there would be any problem managing a few terabytes in a very low-cost open source way, or 100+ terabytes reasonably inexpensively, at least by commercial standards of “reasonably”.
Here’s what I previously posted on the Oracle offering: http://www.dbms2.com/2006/07/03/oracle-graphical-data-models-and-rdf/
What am I missing in your work that outclasses all of this?
Best regards,
CAM