Data types

Analysis of data management technology optimized for specific data types, such as text, geospatial, object, RDF, or XML.

July 3, 2006

Oracle, graphical data models, and RDF

I wrote recently of Cogito’s high-performance engine for modeling graphs. Oracle has taken a very different approach to the same problem, and last Monday I drove over to Burlington to be briefed on it.

Name an approach to data management, and Oracle has probably

(At least, that’s the general template; truth be told, most of the important cases deviate in some way or other.)

May 22, 2006

Introduction to Cogito

In my Computerworld column appearing today, I promised to post here about Cogito. Let me start with a disclosure and a confession:

May 15, 2006

Philip Howard likes Viper

Philip Howard likes DB2’s Viper release. Truth be told, Philip Howard seems to like most products, whether they deserve it or not. But in this case, I think his analysis is spot-on.

May 13, 2006

Hot times at InterSystems

About a year ago, I wrote a very favorable column focusing on InterSystems’ OODBMS Caché. Caché appears to be the one OODBMS product that has good performance even in a standard disk-centric configuration, notwithstanding that random pointer access seems to be antithetical to good disk performance.

InterSystems also has a hot new Caché-based integration product, Ensemble. They attempted to brief me on it (somewhat belatedly, truth be told) last Wednesday. Through no fault of the product, however, the briefing didn’t go so well. I still look forward to learning more about Ensemble.

May 10, 2006

White paper on memory-centric data management — excerpt

Here’s an excerpt from the introduction to my new white paper on memory-centric data management. I don’t know why WordPress insists on showing the table gridlines, but I won’t try to fix that now. Anyhow, if you’re interested enough to read most of this excerpt, I strongly suggest downloading the full paper.

Introduction

Conventional DBMS don’t always perform adequately.

Ideally, IT managers would never need to think about the details of data management technology. Market-leading, general-purpose DBMS (DataBase Management Systems) would do a great job of meeting all information management needs. But we don’t live in an ideal world. Even after decades of great technical advances, conventional DBMS still can’t give your users all the information they need, when and where they need it, at acceptable cost. As a result, specialty data management products continue to be needed, filling the gaps where more general DBMS don’t do an adequate job.

Memory-centric technology is a powerful alternative.

One category on the upswing is memory-centric data management technology. While conventional DBMS are designed to get data on and off disk quickly, memory-centric products (which may or may not be full DBMS) assume all the data is in RAM in the first place. The implications of this design choice can be profound. RAM access speeds are up to 1,000,000 times faster than random reads on disk. Consequently, whole new classes of data access methods can be used when the disk speed bottleneck is ignored. Sequential access is much faster in RAM, too, allowing yet another group of efficient data access approaches to be implemented.
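To put that speed gap in perspective, here is a back-of-the-envelope calculation. The latency figures are illustrative round numbers, not measurements of any particular system:

```python
# Illustrative latencies; real figures vary by hardware generation.
DISK_SEEK_S = 10e-3    # ~10 ms per random disk read (seek + rotational latency)
RAM_ACCESS_S = 100e-9  # ~100 ns per random RAM access

lookups = 1_000_000    # say, a million point queries

disk_time = lookups * DISK_SEEK_S  # 10,000 seconds -- nearly 3 hours
ram_time = lookups * RAM_ACCESS_S  # 0.1 seconds

print(f"disk: {disk_time:,.0f} s, RAM: {ram_time:.1f} s, "
      f"ratio: {disk_time / ram_time:,.0f}x")
```

With these round numbers the ratio works out to 100,000x; pessimistic disk assumptions or cache-friendly RAM access patterns push it toward the 1,000,000x upper bound cited above. Either way, access methods that would be ruinous on disk become entirely practical in RAM.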

It does things disk-based systems can’t.

If you want to query a used-book database a million times a minute, that’s hard to do in a standard relational DBMS. But Progress’ ObjectStore gets it done for Amazon. If you want to recalculate a set of OLAP (OnLine Analytic Processing) cubes in real-time, don’t look to a disk-based system of any kind. But Applix’s TM1 can do just that. And if you want to stick DBMS instances on 99 nodes of a telecom network, all persisting data to a 100th node, a disk-centric system isn’t your best choice – but Solid’s BoostEngine should get the job done.
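As a toy illustration of why real-time cube recalculation favors memory-centric designs, consider a cube small enough to live entirely in RAM, where aggregates can simply be recomputed on demand after every cell update. (This is a hypothetical sketch of the general idea, not a description of how TM1 actually works internally.)

```python
from collections import defaultdict

class TinyCube:
    """A toy in-memory OLAP cube: cells keyed by (region, product),
    with aggregates recomputed on demand from RAM-resident data."""

    def __init__(self):
        self.cells = {}  # (region, product) -> value

    def write(self, region, product, value):
        # A single cell update. With all data in RAM, dependent
        # aggregates can be recomputed immediately and cheaply.
        self.cells[(region, product)] = value

    def total_by_region(self):
        # Full rescan of every cell -- affordable at RAM speeds,
        # ruinous if each cell read were a disk seek.
        totals = defaultdict(float)
        for (region, _product), value in self.cells.items():
            totals[region] += value
        return dict(totals)

cube = TinyCube()
cube.write("EMEA", "books", 100.0)
cube.write("EMEA", "music", 50.0)
cube.write("APAC", "books", 70.0)
print(cube.total_by_region())  # {'EMEA': 150.0, 'APAC': 70.0}

cube.write("EMEA", "music", 60.0)  # a real-time update...
print(cube.total_by_region())      # ...and the totals reflect it at once
```

A disk-based OLAP engine typically precomputes and stores aggregates precisely because rescanning is too slow; the memory-centric design can afford to skip that staleness-prone step.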

Memory-centric data managers fill the gap, in various guises.

Those products are some leading examples of a diverse group of specialist memory-centric data management products. Such products can be optimized for OLAP or OLTP (OnLine Transaction Processing) or event-stream processing. They may be positioned as DBMS, quasi-DBMS, BI (Business Intelligence) features, or some utterly new kind of middleware. They may come from top-tier software vendors or from the rawest of startups. But they all share a common design philosophy: Optimize the use of ever-faster semiconductors, rather than focusing on (relatively) slow-spinning disks.

They have a rich variety of benefits.

For any technology that radically improves price/performance (or any other measure of IT efficiency), the benefits can be found in three main categories:

  • Doing the same things you did before, only more cheaply;
  • Doing the same things you did before, only better and/or faster;
  • Doing things that weren’t technically or economically feasible before at all.

For memory-centric data management, the “things that you couldn’t do before at all” are concentrated in areas that are highly real-time or that use non-relational data structures. Conversely, for many relational and/or OLTP apps, memory-centric technology is essentially a much cheaper/better/faster way of doing what you were already struggling through all along.

Memory-centric technology has many applications.

Through both OEM and direct purchases, many enterprises have already adopted memory-centric technology. For example:

  • Financial services vendors use memory-centric data management throughout their trading systems.
  • Telecom service vendors use memory-centric data management in multiple provisioning, billing, and routing applications.
  • Memory-centric data management is used to accelerate web transactions, including in what may be the most demanding OLTP app of all — Amazon.com’s online bookstore.
  • Memory-centric data management technology is OEMed in a variety of major enterprise network management products, including HP Openview.
  • Memory-centric data management is used to accelerate analytics across a broad variety of industries, especially in such areas as planning, scenarios, customer analytics, and profitability analysis.

May 2, 2006

DBMS2 at IBM

I had a chat a couple of weeks ago with Bob Picciano, who runs servers (i.e., DBMS) for IBM. I came away feeling that, while they don’t use that name, they’re well down the DBMS2 path. By no means is this SAP’s level of commitment; after all, they have to cater to traditional technology strategies as well. But they definitely seem to be getting there.

Why do I say that? Well, in no particular order:

The big piece of a DBMS2 strategy that IBM seems to be lacking is a data-oriented services repository. IBM has had disasters in the past with over-grand repository plans, so they’re treading cautiously this time around. There also might be an organizational issue; DBMS and integration technology sit in separate divisions, and I doubt it’s yet appreciated throughout IBM how central data is to an SOA strategy.

But that not-so-minor detail aside, IBM definitely seems to be developing a DBMS2-like technology vision.

April 10, 2006

Mark Logic’s experiences — from the warhorse’s mouth!

Another subject I meant to blog about is what all I’ve learned from Mark Logic about customer uses for XML.

Well, I have a great workaround for that one. Mark Logic CEO Dave Kellogg has revved up what I think is the most interesting vendor-exec blog I’ve seen. So if you’re interested in search/publishing-style uses for native XML, I strongly encourage you to go browse his blog. (And he writes about a lot of other interesting stuff as well.)

April 10, 2006

IBM’s definition of native XML

IBM’s recent press release on Viper says:

Viper is expected to be the only database product able to seamlessly manage both conventional relational data and pure XML data without requiring the XML data to be reformatted or placed into a large object within the database.

That, so far as I know, is true, at least among major products.

I’m willing to apply the “native” label to Microsoft’s implementation anyway, because conceptually there’s little or no necessary performance difference between their approach and IBM’s. (Dang. I thought I posted more details on that months ago. I need to remedy the lack soon.)

As for Oracle — well, right now Oracle has a bit of a competitive problem

April 6, 2006

Oracle is getting touchy about XML

From Barbara Darrow’s “Unblog”:

“How we store XML on the database is, excuse me, none of your business. The point is you can write an app using XML standards,” said Mark Drake, manager of product management for XML technology for the Redwood Shores, Calif. vendor.

“Whether we shred it, parse it, it doesn’t matter. There is no such thing as a native XML storage model; there is no W3C standard or 11th stone tablet telling us how,” he noted.

So implementation doesn’t matter? I.e., performance doesn’t matter?

That’s not generally Oracle’s viewpoint in areas where it has a performance or implementation advantage, or even parity …
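For readers unfamiliar with the term, “shredding” means decomposing an XML document into flat relational rows. Here is a minimal sketch of the idea using Python’s standard library; the (path, attribute, value) row layout is invented purely for illustration, not taken from any vendor’s actual schema:

```python
import xml.etree.ElementTree as ET

doc = "<order id='7'><item sku='A1' qty='2'/><item sku='B2' qty='1'/></order>"

def shred(xml_text):
    """Decompose an XML document into flat (path, attribute, value) rows --
    the kind of relational representation a 'shredding' store might use."""
    rows = []

    def walk(elem, path):
        p = f"{path}/{elem.tag}"
        for name, value in elem.attrib.items():
            rows.append((p, name, value))
        for child in elem:
            walk(child, p)

    walk(ET.fromstring(xml_text), "")
    return rows

for row in shred(doc):
    print(row)
# ('/order', 'id', '7')
# ('/order/item', 'sku', 'A1')
# ... and so on for the remaining attributes
```

A native XML store, by contrast, keeps the document’s tree structure intact rather than flattening it into rows like these. Whether the two approaches can perform equally well is exactly what the argument above is about.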

March 14, 2006

Software AG’s Tamino?

Software AG consultant Jose Huerga reminded me that Software AG has been selling XML database managers for a long time, and that they are now up to Release 4.4 of Tamino.

Personally, I’m out of touch with Software AG (e.g., I last visited Darmstadt in 1984). Would anybody care to share knowledge of or experiences with this product?
