DBMS product categories
Analysis of database management technology in specific product categories. Related subjects include:
Groovy Corp puts out a ridiculous press release
I knew Groovy Corp’s press release today would be bad, as it was pitched in advance as being about an awe-inspiring benchmark. That part met my very low expectations: the release emphasizes how the Groovy SQL Switch massively outperformed MySQL* in a benchmark, and how this supposedly shows the Groovy SQL Switch would outperform every other competitive RDBMS by at least similar margins.
*While a few use cases are exceptions, being “better than MySQL” for a DBMS is basically like being “better than Pabst Blue Ribbon” for a beer. Unless price is your top consideration, why are you even making the comparison?
Even worse, the press release, from its subhead and very first sentence, emphasizes the claim “the Groovy SQL Switch’s ability to significantly outperform relational databases.” As CEO Joe Ward quickly agreed by email, that’s not accurate. As you would expect from the “SQL” in its name, the Groovy SQL Switch is just as relational as the products it’s being contrasted to. Unfortunately for Joe, who I gather aspires to edit it to say something more sensible, the press release is out already in multiple places.
More favorably, Renee Blodgett has a short, laudatory post about Groovy, with some kind of embedded video.
Categories: Groovy Corporation, In-memory DBMS, Memory-centric data management, MySQL, OLTP | 17 Comments |
What are the best choices for scaling Postgres?
March 2011 edit: In its quaintness, this post is a reminder of just how fast Short Request Processing DBMS technology has been moving ahead. If I had to do it all over again, I’d suggest they use one of the high-performance MySQL options like dbShards, Schooner, or both together. I actually don’t know what they finally decided on in that area. (I do know that for analytic DBMS they chose Vertica.)
I have a client who wants to build a new application with peak update volume of several million transactions per hour. (Their base business is data mart outsourcing, but now they’re building update-heavy technology as well.) They have a small budget. They’ve been a MySQL shop in the past, but would prefer to contract (not eliminate) their use of MySQL rather than expand it.
My client actually signed a deal for EnterpriseDB’s Postgres Plus Advanced Server and GridSQL, but unwound the transaction quickly. (They say EnterpriseDB was very gracious about the reversal.) There seem to have been two main reasons for the flip-flop. First, it seems that EnterpriseDB’s version of Postgres isn’t up to PostgreSQL’s 8.4 feature set yet, although EnterpriseDB’s timetable for catching up might have been tolerable. But GridSQL apparently is even further behind, with no timetable for up-to-date PostgreSQL compatibility. That was the dealbreaker.
The current base-case plan is to use generic open source PostgreSQL, with scale-out achieved via hand sharding, Hibernate, or … ??? Experience and thoughts along those lines would be much appreciated.
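To make that concrete, here’s a bare-bones sketch (mine, not the client’s plan) of what application-level hand sharding could look like. The shard count, connection strings, and routing key are all made up for illustration, and a real system would also need to handle resharding, cross-shard queries, and transactions that span shards.

```python
# Minimal hand-sharding sketch (hypothetical shard layout and routing key).
# Routes each row to one of several PostgreSQL instances by hashing a key.
import hashlib

# Hypothetical connection strings, one per physical PostgreSQL instance.
SHARD_DSNS = [
    "host=pg-shard-0 dbname=app user=app",
    "host=pg-shard-1 dbname=app user=app",
    "host=pg-shard-2 dbname=app user=app",
    "host=pg-shard-3 dbname=app user=app",
]

def shard_for(key: str) -> int:
    """Map a routing key (e.g. a customer id) to a shard index.

    md5 is used instead of Python's hash() so the mapping is stable
    across processes and restarts.
    """
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % len(SHARD_DSNS)

def dsn_for(key: str) -> str:
    return SHARD_DSNS[shard_for(key)]

if __name__ == "__main__":
    # All rows for one customer land on the same shard, so single-customer
    # updates stay on one node; cross-customer queries must fan out.
    for customer in ("cust-1001", "cust-1002", "cust-1003"):
        print(customer, "->", dsn_for(customer))
```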
Other options for OLTP performance and scale-out are of course memory-centric products such as VoltDB or the Groovy SQL Switch. But this client’s database is terabyte-scale, so hardware costs could be an issue, as of course could be product maturity.
By the way, a large fraction of these updates will be actual changes, as opposed to new records, in case that matters. I expect that the schema being updated will be very simple — i.e., clearly simpler than in a classic order entry scenario.
The Groovy SQL Switch
I’ve now had a chance to talk with Groovy Corporation CEO Joe Ward, and can add to what Groovy advisor Tony Bain wrote about Groovy Corp and its SQL Switch DBMS. Highlights include: Read more
Categories: Groovy Corporation, In-memory DBMS, Memory-centric data management, OLTP | 2 Comments |
XtremeData announces its DBx data warehouse appliance
XtremeData is announcing its DBx data warehouse appliance today. Highlights include: Read more
Categories: Benchmarks and POCs, Data warehouse appliances, Data warehousing, Pricing, XtremeData | 34 Comments |
Netezza on concurrency and workload management
I visited Netezza Friday for what was mainly an NDA meeting. But while I was there I asked where Netezza stood on concurrency, workload management, and rapid data mart spin-out. Netezza’s claims in those regards turned out to be surprisingly strong.
In the biggest surprise, Netezza claimed at least one customer had >5,000 simultaneous users, and a second had >4,000. Both are household names. Other unspecified Netezza customers apparently also have >1,000 simultaneous users. Read more
Categories: Data warehouse appliances, Data warehousing, Netezza, Teradata, Theory and architecture, Workload management | 13 Comments |
Update on Microsoft’s Madison and Fast Track data warehouse products
I chatted with Stuart Frost of Microsoft yesterday. Stuart is and remains GM of Microsoft’s data warehouse product unit, covering $1 billion or so of revenue. While rumors of Stuart’s departure from Microsoft are clearly exaggerated, it does seem that his role is more one of coordination than actual management.
Microsoft Madison availability remains scheduled for H1 2010. Nothing new there. Tangible progress includes a few customer commitments of various sorts, including one outright planned purchase (due to some internal customer considerations around using up a budget). At the moment various Microsoft Madison technology “previews” are going on, which seem to amount to proofs-of-concept that:
- Start with actual customer data (some from Microsoft, some from outside)
- Generate larger synthesized data sets based on those (database size seems to be 10-100 TB)
- Run in Microsoft data centers or “technology centers”, rather than on customer premises.
The basic Microsoft Madison product distribution strategy seems to be: Read more
Groovy Corp
Groovy Corp sent over a press release and apparently suggested I write about the company’s wonderfulness immediately. This was without any kind of briefing. I don’t do that kind of thing.
However, a Twitter check revealed that Tony Bain is familiar with Groovy Corp and the Groovy SQL Switch (apparently they started out in Australia, where he lives and works, and he evidently knows the guys). Tony’s take, in summary, is (emphasis mine):
- They are an in-memory RDBMS
- They have worked with Intel to architect from the ground up for large multi-processor concurrency
- Initially they are launching as a multi-core appliance
- They claim 200,000 SQL operations per second from a single box
- They are proprietary (not built on MySQL or any other open source database), which means they have had a lot of control over their architecture
- They are a pretty cool company with some interesting people
There’s a little more detail at the above link.
Categories: DBMS product categories, Groovy Corporation, In-memory DBMS, Memory-centric data management, OLTP | 3 Comments |
Oracle cites Exadata wins
A couple of weeks ago, Oracle put out a press release about Exadata wins. Highlights include:
- 20 names of actual customers.
- One quote citing a competitive win (over Netezza)
- One quote citing a ~50X speedup of one query “without manual tuning”
- One quote citing consistent 10-72X query performance speedups
- One quote citing a speedup from “days” to “minutes”
Unless I missed it, none of the quotes implied Exadata was actually in production, and none compared hardware between the old/slow/production and Exadata/fast/test systems.
Categories: Data warehouse appliances, Data warehousing, Exadata, Market share and customer counts, Netezza, Oracle | Leave a Comment |
Hasso Plattner calls for in-memory OLTP column stores
Former SAP CEO Hasso Plattner has written a paper called A Common Database Approach for OLTP and OLAP Using an In-Memory Column Database, in association with a SIGMOD keynote address.* The approach Plattner advocates is an MPP in-memory column store, presumably somewhat akin to SAP’s frequently renamed Business Warehouse Accelerator/Business Intelligence Accelerator/BWA/BIA/Son-of-TREX technology. There also are strong similarities to the MPP in-memory row store project H-Store/VoltDB, although I don’t know whether Plattner would go so far as to adopt the H-Store view that all transactions should run in stored procedures. Unsurprisingly, SAP applications are used as the OLTP paradigm throughout.
*Thanks to Dave Kellogg for tipping me off to Plattner’s paper. I only went to two SIGMOD sessions, neither of which was Plattner’s. Nobody actually mentioned Plattner’s talk to me when I was down at SIGMOD.
Perhaps the most interesting part is Plattner’s claim that what’s demanding about OLTP isn’t database updating per se, but rather maintaining aggregates for quick-response analytics. In his main example of that point, Plattner cites a real-life schema of “more than 18” tables, of which 2 are base tables, while (most of?) the rest are materialized views that his proposed database architecture dispenses with (because analytic performance is sufficiently good without them). Thus, Plattner’s core columnar argument seemingly is
columnar → natively fast analytics → no need to maintain aggregates → much lower update burden.
That said — if Plattner’s paper contained a clear statement of how much more expensive it is to insert or update a single row in a columnar vs. row-based system, I overlooked it. Instead, Plattner seems to be arguing that the volume of base-table updates is low enough that — whatever it may be — column-store update overhead is an acceptable price to pay. (At one point he claims that only 5% of the data inserted in a financial application ever gets changed.) That may actually be true in a financial accounting system, but seems more questionable in a sufficiently large application that gets its updates from automatic devices, or from the consumer web.
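To make the aggregate-maintenance point concrete, here’s a toy sketch of my own (nothing like it appears in Plattner’s paper) contrasting the two approaches: updating a stored per-account total on every insert, versus just appending to the base columns and scanning when a total is asked for. Plattner’s bet, in effect, is that the column scan in the second version is fast enough to make the first unnecessary.

```python
# Toy illustration (not from Plattner's paper) of the aggregate-maintenance
# trade-off: keep a running total up to date on every write, or skip the
# aggregate entirely and scan the base "columns" when a total is requested.
from collections import defaultdict

class WithMaintainedAggregate:
    """OLTP-style: every insert also updates a per-account total."""
    def __init__(self):
        self.rows = []                       # base table
        self.totals = defaultdict(float)     # materialized aggregate

    def insert(self, account, amount):
        self.rows.append((account, amount))
        self.totals[account] += amount       # extra write work on every insert

    def total(self, account):
        return self.totals[account]          # instant read

class ScanOnDemand:
    """Plattner-style: no stored aggregate; rely on a fast column scan."""
    def __init__(self):
        self.accounts = []   # column 1
        self.amounts = []    # column 2

    def insert(self, account, amount):
        self.accounts.append(account)        # insert touches only base columns
        self.amounts.append(amount)

    def total(self, account):
        # The whole argument: this scan must be fast enough to replace
        # the maintained aggregate.
        return sum(amt for acct, amt in zip(self.accounts, self.amounts)
                   if acct == account)

if __name__ == "__main__":
    for store in (WithMaintainedAggregate(), ScanOnDemand()):
        store.insert("acct-1", 100.0)
        store.insert("acct-2", 25.0)
        store.insert("acct-1", -40.0)
        print(type(store).__name__, store.total("acct-1"))
```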
Other highlights include: Read more
NoSQL?
Eric Lai emailed today to ask what I thought about the NoSQL folks, and especially whether I thought their ideas were useful for enterprises in general, as opposed to just Web 2.0 companies. That was the first I heard of NoSQL, which seems to be a community discussing SQL alternatives popular among the cloud/big-web-company set, such as BigTable, Hadoop, Cassandra and so on. My short answers are:
- In most cases, no.
- Most of these technologies are designed for simple, high-volume OLTP (OnLine Transaction Processing). Most large enterprises have an established way of doing OLTP, probably via relational database management systems. Why change?
- MapReduce is an exception, in that it’s designed for analytics (see the sketch after this list). MapReduce may be useful for enterprises. But where it is, it probably should be integrated into an analytic DBMS.
- There’s one big countervailing factor to all these generalities — schema flexibility.
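For reference, here’s a minimal single-process sketch of the MapReduce programming model doing a GROUP BY-style sum. A real framework such as Hadoop would of course run the map and reduce phases in parallel across many machines, but the analytic flavor of the model is the same.

```python
# Minimal single-process sketch of the MapReduce programming model,
# doing the equivalent of SELECT region, SUM(sales) ... GROUP BY region.
from collections import defaultdict

records = [
    {"region": "east", "sales": 120.0},
    {"region": "west", "sales": 75.0},
    {"region": "east", "sales": 30.0},
]

def map_phase(record):
    # Emit (key, value) pairs; here, one pair per input record.
    yield record["region"], record["sales"]

def shuffle(pairs):
    # Group values by key, as the framework would between map and reduce.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(key, values):
    # Combine all values for one key into a single result.
    return key, sum(values)

if __name__ == "__main__":
    pairs = (pair for record in records for pair in map_phase(record))
    for key, values in shuffle(pairs).items():
        print(reduce_phase(key, values))
```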
As for the longer form, let me start by noting that there are two main kinds of reason for not liking SQL. Read more