Application areas
Posts focusing on the use of database and analytic technologies in specific application domains.
Facts and rumors
- Vertica is putting out a press release today touting its 100th customer and talking of triple-digit growth last year.
- Multiple sources have told me that the DATAllegro system is being thrown out of Dell, so evidently Dell is telling this to one and all. If that goes through, this would presumably leave TEOCO as DATAllegro’s single happy customer. (I haven’t checked with Microsoft for its view.)
- A rumor has it that InfiniBand technology vendor Voltaire, Ltd. privately claims triple-digit sales of switches for Exadata 1 (I think that would be one switch per Exadata installation, not per rack). Based just on a quick glance, this is far from confirmed by Voltaire’s earnings conference call transcripts or SEC filings. However, the most recent transcript does seem to indicate that Voltaire got multiple Exadata deals in the telecommunications sector, and it suggests some Exadata penetration in other sectors as well.
- I was told of a classified-agency user that has >1 petabyte of data on Exadata 1 and 600 terabytes or so on Netezza. My not-obviously-biased source says the agency is distinctly happier with Netezza than Exadata.
- Like ParAccel, Oracle just got dinged for TPC-related misbehavior.
- Rumor has it that Sun has no intention of helping ParAccel rerun its withdrawn TPC-H benchmark.
- ParAccel has withdrawn from its home page the claim to be the “CERTIFIED” price-performance leader. This seems to confirm that the claim was a reference to the TPC-H. In my opinion, that was a gross misrepresentation of what the TPC-H shows.
Oracle gives a few customer database size examples
In its recent quarterly conference call, Oracle said (as per the Seeking Alpha transcript):
AC Neilsen, for instance, we deployed a 45-terabyte data [mart], they called it; Adidas, 13 terabytes; Australian Bureau of Statistics, 250 terabytes; and of course, some of our high-end ones that you have probably heard of in the past, AT&T, 250 terabytes; Yahoo!, 700 terabytes — just gives you an idea of the size of the databases that are out there and how they are growing, and that’s driving the need for greater throughput.
I don’t know what’s being counted there, but I wouldn’t be surprised if those were legit user-data figures.
Some other notes:
- The Yahoo database is of course Yahoo’s first-generation data warehouse, which has been largely superseded by an internal system more than 10X that size. (Edit: Actually, Greg Rahn of Oracle says below that it’s a different database.)
- I’m keynoting the Netezza road show this month, and Nielsen is up there on stage touting Netezza. (Edit: Nielsen indeed does the overwhelming majority of its data warehousing on Netezza.)
- I’d be surprised if AT&T’s largest data warehouse were “only” 250 terabytes in size. (Edit: Actually, I am told the database in question is 310 TB of user data and growing. More later, hopefully.)
- Oracle didn’t exactly say that those were Exadata installations.
Categories: Analytic technologies, Data warehousing, Exadata, Netezza, Oracle, Specific users, Telecommunications, Web analytics, Yahoo | 10 Comments |
HadoopDB
Despite a thoughtful heads-up from Daniel Abadi at the time of his original posting about HadoopDB, I’m just getting around to writing about it now. HadoopDB is a research project carried out by a couple of Abadi’s students. Further research is definitely planned. But it seems too early to say whether HadoopDB will ever get past the “research and oh by the way the code is open sourced” stage and become a real code line — commercialized, open source, or both.
The basic idea of HadoopDB is to put copies of a DBMS at different nodes of a grid, and use Hadoop to parcel work among them; a minimal sketch of the pattern appears after the list below. Major benefits when compared with massively parallel DBMS are said to be:
- Open/cheap/free
- Query fault-tolerance
- The related concept of tolerating node degradation that isn’t an outright node failure.
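To make that concrete, here is a minimal sketch of the pattern. It is my own illustration, not HadoopDB’s actual code, and the table, query fragment, and connection details are all made up: a Hadoop map task pushes a SQL fragment down to the PostgreSQL instance on its own node via JDBC and emits the rows for Hadoop to shuffle and reduce.

```java
import java.io.IOException;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Each map task queries the DBMS instance on its own node and emits
// partial results for Hadoop to shuffle, reduce, and (on node failure)
// reschedule.
public class LocalDbMapper
        extends Mapper<NullWritable, NullWritable, Text, LongWritable> {

    // Hypothetical per-node query fragment; HadoopDB generates these
    // from the overall SQL query.
    private static final String FRAGMENT =
        "SELECT url, COUNT(*) AS hits FROM visits GROUP BY url";

    @Override
    public void run(Context context)
            throws IOException, InterruptedException {
        // Ignore Hadoop's input records; run the fragment once per task
        // against the node-local PostgreSQL (connection details made up).
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:postgresql://localhost/warehouse", "etl", "secret");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(FRAGMENT)) {
            while (rs.next()) {
                // Emit per-node partial aggregates; reducers combine them.
                context.write(new Text(rs.getString("url")),
                              new LongWritable(rs.getLong("hits")));
            }
        } catch (SQLException e) {
            throw new IOException(e);
        }
    }
}
```

Because each fragment runs as an ordinary Hadoop task, a failed or slow fragment can be rescheduled like any other task, which is where the fault-tolerance and degradation-tolerance benefits above come from.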
HadoopDB has actually been built with PostgreSQL. That version achieved performance well below that of a commercial DBMS “DBX”, where X=2. Column-store guru Abadi has repeatedly signaled his intention to try out HadoopDB with VectorWise at the nodes instead. (Recall that VectorWise is shared-everything.) It will be interesting to see how that configuration performs.
In my opinion, however, the real opportunity for HadoopDB may lie elsewhere. Read more
Fault-tolerant queries
MapReduce/Hadoop fans sometimes raise the question of query fault-tolerance. That is — if a node fails, does the query need to be restarted, or can it keep going? For example, Daniel Abadi et al. trumpet query fault-tolerance as one of the virtues of HadoopDB. Some of the scientists at XLDB spoke of query fault-tolerance as being a good reason to leave 100s or 1000s of terabytes of data in Hadoop-managed file systems.
When we discussed this subject a few months ago in a couple of comment threads, it seemed to be the case that:
- Hadoop generally has query fault-tolerance. Intermediate result sets are materialized, and data isn’t tied to particular nodes anyway. So if a node goes down, its work can be sent to another node. (A toy sketch contrasting the two failure models appears after this list.)
- Hive actually did not have query fault-tolerance at that time, but it was on the roadmap. (Edit: Actually, it did within a single MapReduce job. But one Hive job can comprise several rounds of MapReduce.)
- Most DBMS vendors do not have query fault-tolerance. If a query fails, it gets restarted from scratch.
- Aster Data’s nCluster, however, does appear to have some kind of query fault-tolerance.
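Here is a toy sketch of the contrast in the list above. It is my own illustration, not Hadoop’s or any vendor’s actual code: the first method retries just the failed task, which is feasible when the task’s inputs are materialized and not tied to one node, while the second restarts the entire query on any failure, as most DBMS do.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;

public class FaultToleranceModels {

    // Hadoop-style: each task's inputs are materialized, so a failed
    // task is simply resubmitted (in real life, to a different node).
    static <T> List<T> perTaskRetry(List<Callable<T>> tasks,
                                    ExecutorService pool,
                                    int maxAttempts) throws Exception {
        List<T> results = new ArrayList<>();
        for (Callable<T> task : tasks) {
            ExecutionException last = null;
            for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                try {
                    results.add(pool.submit(task).get());
                    last = null;
                    break;                    // this task succeeded
                } catch (ExecutionException e) {
                    last = e;                 // node died; retry just this task
                }
            }
            if (last != null) throw last;     // give up on the whole query
        }
        return results;
    }

    // Typical-DBMS-style: any task failure aborts and restarts the
    // entire query, repeating all the work already done.
    static <T> List<T> wholeQueryRestart(List<Callable<T>> tasks,
                                         ExecutorService pool,
                                         int maxAttempts) throws Exception {
        ExecutionException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                List<Future<T>> futures = new ArrayList<>();
                for (Callable<T> task : tasks) futures.add(pool.submit(task));
                List<T> results = new ArrayList<>();
                for (Future<T> f : futures) results.add(f.get());
                return results;               // every task succeeded
            } catch (ExecutionException e) {
                last = e;                     // one failure restarts everything
            }
        }
        throw last;
    }
}
```

The longer the query and the bigger the cluster, the more painful the second model gets, which previews the question below of when anybody would care.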
This raises an obvious (pair of) question(s) — why and/or when would anybody ever care about query fault-tolerance? Read more
Categories: Analytic technologies, Aster Data, Data warehousing, Hadoop, Parallelization, Scientific research, Theory and architecture | 10 Comments |
Introduction to the XLDB and SciDB projects
Before I write anything else about the overlapping efforts known as XLDB and SciDB, I probably should explain and disambiguate what they are as best I can. XLDB was organized and still is run by guys who want to solve a scientific problem in eXtremely Large DataBase management, most especially Jacek Becla of SLAC (the organization previously known as the Stanford Linear Accelerator Center). Becla’s original motivation was his need for a DBMS to manage what will be 55 petabytes of raw image data and 100 petabytes of astronomical data in total for LSST (the Large Synoptic Survey Telescope). Read more
Categories: Data models and architecture, Database diversity, eBay, Michael Stonebraker, Open source, Petabyte-scale data management, Scientific research, Theory and architecture | 2 Comments |
Sybase IQ business notes
As specialized analytic DBMS go, Sybase IQ is near the top of the charts both in age (it was first introduced in the mid-1990s) and in adoption. That’s even more true, of course, if we restrict the discussion strictly to columnar DBMS, aka column stores. Basic Sybase IQ adoption claims include:
- >1500 users
- >3000 installations (Sybase has variously cited 2.1 and 2.5+ as the installation/user ratio)
- At least ~50-60 installations with >5 terabytes of user data
Note that 98% of Sybase IQ installations are under 5 terabytes, which squares with the figures above (~60 larger installations out of >3000 is about 2%); the heart of Sybase IQ’s business is the sub-terabyte data warehouse market.* Read more
Categories: Analytic technologies, Data mart outsourcing, Data warehousing, Investment research and trading, Sybase | 3 Comments |
Vertica’s version of MapReduce integration
I talked with Omer Trajman of Vertica Monday night about Vertica’s MapReduce integration, part of its Vertica 3.5 release. Highlights included:
- By “integrating Vertica and MapReduce,” Vertica means “integrating Vertica and Hadoop.”
- Vertica’s Hadoop integration is based on Cloudera’s DBInputFormat. (A generic sketch of the DBInputFormat pattern appears after this list.)
- Omer called out for me several features of Vertica’s Hadoop integration that didn’t just come from Cloudera, namely:
- Cloudera’s DBInputFormat assumes the database runs on a single computer, or a single head node of an MPP system. Vertica’s technology, however, runs on peer parallel nodes with no head, and so Vertica adapted the DBInputFormat technology accordingly.
- Vertica lets you push down Map functions to the database. Omer reports a roughly even division among users and prospects between those who want to do this and those who don’t.
- Vertica lets you do Reduce functions (or Map functions, if you don’t push them down to the database) on a separate cluster from the one running the database software. Vertica asserts that its customers and prospects all want to do this. Right here is the big difference between Vertica’s MapReduce integration and Aster’s or Greenplum’s. (Aster would also say that Vertica’s weaker MapReduce/SQL programming integration is a big difference.)
- Indeed, Vertica lets you Reduce into a different DBMS than Vertica, if you choose.
- Vertica gives you flexibility on the size of the Map and Reduce clusters. Omer agreed with me when I said there were some limits on how fast one can add or subtract nodes in a Vertica grid, because there’s data redistribution involved. But one can add/change/delete Hadoop clusters extremely quickly.
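For context, here is a minimal sketch of the generic Hadoop DBInputFormat pattern that Vertica builds on. This is not Vertica’s connector code; the record class, table, columns, driver class, and connection string are all hypothetical, and Vertica’s actual integration adds the no-head-node parallelism described above.

```java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.db.DBConfiguration;
import org.apache.hadoop.mapreduce.lib.db.DBInputFormat;
import org.apache.hadoop.mapreduce.lib.db.DBWritable;

// One row of a hypothetical "clicks" table, readable by DBInputFormat.
public class ClickRecord implements Writable, DBWritable {
    private String url;
    private long visits;

    public void readFields(ResultSet rs) throws SQLException {
        url = rs.getString("url");
        visits = rs.getLong("visits");
    }
    public void write(PreparedStatement ps) throws SQLException {
        ps.setString(1, url);
        ps.setLong(2, visits);
    }
    public void readFields(DataInput in) throws IOException {
        url = in.readUTF();
        visits = in.readLong();
    }
    public void write(DataOutput out) throws IOException {
        out.writeUTF(url);
        out.writeLong(visits);
    }

    // Job setup: point Hadoop at the database and let DBInputFormat
    // split the table into per-mapper queries.
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        DBConfiguration.configureDB(conf,
            "com.vertica.jdbc.Driver",               // JDBC driver class (assumed)
            "jdbc:vertica://dbhost:5433/warehouse",  // hypothetical URL
            "dbadmin", "secret");
        Job job = Job.getInstance(conf, "scan-clicks");
        job.setJarByClass(ClickRecord.class);
        DBInputFormat.setInput(job, ClickRecord.class,
            "clicks",          // table name
            null,              // WHERE conditions
            "url",             // ORDER BY column, used for splitting
            "url", "visits");  // columns to read
        job.setInputFormatClass(DBInputFormat.class);
        // ... set mapper/reducer classes as usual, then:
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

One structural note: stock DBInputFormat issues its split queries through a single JDBC endpoint, which is precisely the head-node assumption Vertica says it engineered around.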
Apparently, the use cases for Vertica/Hadoop integration to date lie in algorithmic trading and two kinds of web analytics. Specifically: Read more
Vertica customer notes
Dave Menninger of Vertica called to discuss NDA product futures, as vendors tend to do in the weeks before a TDWI conference. So we also talked a bit about the Vertica customer base. That’s listed as 86 at the end of Q2, up from 74 at the end of Q1. That’s pretty small growth compared with Q1’s, which Dave didn’t fully explain. But then, off the top of his head, he was recalling the Q1 number as being lower than that 74, so maybe there’s a reporting glitch in the loop somewhere.
Vertica’s two biggest customer segments are telecommunications and financial services, and Dave drew an interesting distinction between what the two groups care about. Telecom companies care about data warehouses that are big and 24/7 reliable, but don’t do particularly complex analytics. Financial services — by which he presumably means mainly proprietary traders — are most focused on complex and competitively innovative analytics.
Also mentioned in various contexts were web-based outfits such as data mart outsourcers, social networkers, and open-source software providers.
Vertica also offers customer win stories in other segments, but most actual discussion about what Vertica does revolves around the application areas mentioned above, just as it has in the past.
Similar (not necessarily identical) generalizations would be true of many other analytic DBMS vendors.
Yahoo is up to 10 petabytes now?
According to somebody (I forget who) who attended Yahoo’s SIGMOD presentation last week, the big Yahoo database is now up to 10 petabytes in size, in line with Yahoo’s predictions last year. Apparently, Yahoo also gave more details of how the technology works.
Categories: Columnar database management, Data warehousing, Web analytics, Yahoo | 5 Comments |
MMO games are still screwed up in their database technology
Two years ago I wrote about the database technology of Guild Wars. Not coincidentally, Guild Wars was the MMO RPG (Massively Multiplayer Online Role-Playing Game) I then played. I had the chance to interview Guild Wars’ lead developers. While much else they had to say was impressive, Guild Wars’ database architecture was — er, it was rather mind-boggling.
Since then, Linda and I have taken to playing Lord of the Rings Online, commonly known as LOTRO, developed by Turbine, Inc. I haven’t had the chance to interview any Turbine folks, despite repeated requests. But from afar, it would seem that Turbine’s technology choices leave quite a bit to be desired in enterprise-like IT areas such as authentication, database management, and storage.
LOTRO and other Turbine games are commonly down, whether for scheduled maintenance or otherwise. Many weeks start with scheduled multi-hour downtime. There are fairly frequent server restarts in addition to that. Lag and congestion are frequent. And so on and so forth. By way of contrast, Guild Wars very rarely goes down, and other technical difficulties are less common as well. Reliability is a key design goal for Guild Wars’ developers, and in my opinion they’ve achieved it.
Some of the reasons for Turbine’s difficulties seem related to the stresses of MMOs — e.g., the problems caused by having many fictional characters moving through the same fictional space at once, with graphical detail much richer than Guild Wars’. But a couple of head-scratchers make me really wonder about how Turbine manages data. Read more
Categories: Fun stuff, Games and virtual worlds, Specific users | 18 Comments |