Database compression
Analysis of technology that compresses data within a database management system.
Notes on data warehouse appliance prices
I’m not terribly motivated to do a detailed analysis of data warehouse appliance list prices, in part because:
- Everybody knows that in practice data warehouse appliances tend to be deeply discounted from list price.
- The only realistic metric to use for pricing data warehouse appliances is price-per-terabyte, and people have gotten pretty sick of that one.
That said, here are some notes on data warehouse appliance prices. Read more
Categories: Data warehouse appliances, Data warehousing, Database compression, EMC, Exadata, Greenplum, Netezza, Oracle, Pricing | 8 Comments |
Aster Data nCluster Version 4.6
The main thing in Aster Data nCluster Version 4.6 is Aster’s version of hybrid row-column store technology. Technical highlights include:
- Aster Data is simply taking the number of storage options in nCluster up from 1 to 2 – you can now store a table either in the Aster Data nCluster row store or in its column store.
- In fact, you can store parts of a table in the row store and other parts in the column store (a toy sketch of the two layouts follows this list). I'm a bit foggy on the details of that – Aster makes discussions of partitioning more complicated than they need to be – but it definitely sounds pretty flexible. Edit: See comment thread below.
- Anything you can do with the Aster Data nCluster row store you can also do with the Aster Data nCluster column store. In particular, that includes all of Aster Data’s analytic functionality.
- The same is true vice versa. Note, however, that there is no column-oriented compression in Aster Data nCluster at this time.
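To make the row store/column store distinction concrete, here's a toy sketch of the two layouts – the general idea only, with made-up table contents, not Aster's implementation:

```python
# Toy illustration of row-store vs. column-store layouts.
# This is NOT Aster Data's implementation -- just the general idea.

rows = [
    (1, "2011-01-05", 19.99),
    (2, "2011-01-06", 5.00),
    (3, "2011-01-06", 42.50),
]

# Row store: each record's fields are stored together.
# Good for "fetch the whole row" access patterns.
row_store = list(rows)

# Column store: each column's values are stored together.
# Good for scans that touch only a few columns, e.g. SUM(amount).
column_store = {
    "order_id": [r[0] for r in rows],
    "order_date": [r[1] for r in rows],
    "amount": [r[2] for r in rows],
}

# A column scan reads one array instead of every row:
total = sum(column_store["amount"])
print(round(total, 2))  # 67.49
```

The hybrid point is simply that one table can keep some of its data in each layout.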
So Aster Data has now joined Greenplum/EMC among row-based analytic DBMS vendors with hybrid row-column stores. Oracle will join them some day, and the same probably applies to other row-based vendors as well. Similarly, Aster Data will probably join Oracle some day in having columnar compression. And so this all fits the model:
- Aster Data has an impressively competitive analytic relational DBMS, considering the youth and size of the company.
- Aster Data is a leader in extending its analytic relational DBMS by integrating in other analytic processing capabilities.
Categories: Analytic technologies, Aster Data, Columnar database management, Data warehousing, Database compression | 4 Comments |
More on temp space, compression, and “random” I/O
My PhD was in a probability-related area of mathematics (game theory), so I tend to squirm when something is described as “random” that clearly is not. That said, a comment by Shilpa Lawande on our recent flash/temp space discussion suggests the following way of framing a key point:
- You really, really want to have multiple data streams coming out of temp space, as close to simultaneously as possible.
- The storage performance characteristics of such a workload are more reminiscent of “random” than “sequential” I/O.
If everybody else is cool with it too, I can live with that. 🙂
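To make that framing concrete, here's a toy model – stream counts and block addresses are invented – of why concurrently consumed spool streams present a random-looking address sequence to storage:

```python
# Toy model of why many concurrent temp-space streams look "random" to storage.
# Block addresses and stream counts are made up for illustration.

STREAMS = 8          # concurrent spool files being read back
BLOCKS_PER_STREAM = 4

# Each stream is perfectly sequential on its own: stream i occupies
# blocks [i*1000, i*1000 + BLOCKS_PER_STREAM).
streams = [[i * 1000 + b for b in range(BLOCKS_PER_STREAM)] for i in range(STREAMS)]

# But the DBMS consumes them near-simultaneously (round-robin here),
# so the address sequence the storage actually sees jumps all over the place.
interleaved = [streams[i][b] for b in range(BLOCKS_PER_STREAM) for i in range(STREAMS)]

jumps = [abs(b2 - b1) for b1, b2 in zip(interleaved, interleaved[1:])]
print(interleaved[:10])  # [0, 1000, 2000, ...] -- big jumps between consecutive reads
print(sum(j > 1 for j in jumps), "of", len(jumps), "transitions are seeks")
```

Flash doesn't much care about those jumps; spinning disk very much does, which is the crux of the temp-space-on-flash argument.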
Meanwhile, I talked again with Tim Vincent of IBM this afternoon. Tim endorsed the temp space/Flash fit, but with a different emphasis, which upon review I find I don’t really understand. The idea is:
- Analytic DBMS processing generally stresses reads over writes.
- Temp space is an exception — read and write use of temp space is pretty balanced. (You spool data out once, you read it back in once, and that’s the end of that; next time it will be overwritten.)
My problem with that is: Flash typically has lower write IOPS (I/O operations per second) than read IOPS, so being (relatively) write-intensive would, to a first approximation, seem if anything to disfavor a workload for flash.
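To spell out that first approximation with entirely hypothetical IOPS figures:

```python
# First-approximation arithmetic with hypothetical flash IOPS ratings.
read_iops, write_iops = 50_000, 10_000   # made-up numbers

def effective_iops(read_frac):
    # Time per average I/O = read_frac/read_rate + write_frac/write_rate,
    # so the blended rate is the harmonic-style combination below.
    return 1 / (read_frac / read_iops + (1 - read_frac) / write_iops)

print(round(effective_iops(0.9)))  # ~35714 -- a read-mostly (90/10) workload
print(round(effective_iops(0.5)))  # ~16667 -- a balanced temp-space mix is write-bound
```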
On the plus side, I was reminded of something I should have noted when I wrote about DB2 compression before:
Much like Vertica, DB2 operates on compressed data all the way through, including in temp space.
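For illustration, here's a minimal dictionary-compression sketch – the generic technique, not DB2's actual row format – showing how a predicate can be evaluated on encoded values without decompressing them:

```python
# Minimal dictionary-compression sketch (generic technique, not DB2's
# actual format): evaluate a predicate on encoded values directly.

values = ["MA", "CA", "MA", "NY", "CA", "MA"]
dictionary = {v: i for i, v in enumerate(sorted(set(values)))}  # {'CA':0,'MA':1,'NY':2}
encoded = [dictionary[v] for v in values]                       # [1,0,1,2,0,1]

# WHERE state = 'MA' -- encode the constant once, compare small ints,
# never decompress the column. The same trick works in temp space:
# spill the codes, read the codes back, keep operating on them.
target = dictionary["MA"]
matches = [i for i, code in enumerate(encoded) if code == target]
print(matches)  # [0, 2, 5]
```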
Categories: Data warehousing, Database compression, IBM and DB2, Vertica Systems | 6 Comments |
Vertica’s innovative architecture for flash, plus more about temp space than you perhaps wanted to know
Vertica is announcing:
- Technology it already has released*, but has not published any reference architectures for.
- A Barney partnership.**
In other words, Vertica has succumbed to the common delusion that it’s a good idea to put out half-baked press releases the week of TDWI conferences. But if we look past that kind of all-too-common nonsense, Vertica is highlighting an interesting technical story, about how the analytic DBMS industry can exploit solid-state memory technology.
*Upgrades to Vertica FlexStore to handle flash memory, actually released as part of Vertica 4.0
** With Fusion I/O
To set the context, let’s recall a few points I’ve noted in the past:
- Solid-state memory’s price/throughput tradeoffs obviously make it the future of database storage.
- The flash future is coming soon, in part because flash’s propensity to wear out is overstated. This is especially true in the case of modern analytic DBMS, which tend to write to blocks all at once, and most particularly the case for append-only systems such as Vertica.
- Being able to intelligently split databases among various cost tiers of storage – e.g. flash and disk – makes a whole lot of sense.
Taken together, those points tell us:
For optimal price/performance, analytic DBMS should support databases that run part on flash, part on disk.
While all this remains in the future for some other analytic DBMS vendors, Vertica is shipping it today.* What’s more, three aspects of Vertica’s architecture make it particularly well-suited for hybrid flash/disk storage, in each case for a similar reason – you can get most of the performance benefit of all-flash for a relatively low actual investment in flash chips: Read more
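As a generic illustration of the hot/cold split – my sketch with invented sizes and access rates, not Vertica FlexStore's actual mechanics – a placement policy can be as simple as:

```python
# Toy hot/cold placement policy -- generic illustration, not FlexStore itself.

objects = {
    # name: (size_gb, reads_per_day) -- all values invented
    "recent_partition": (50, 90_000),
    "working_indexes": (20, 40_000),
    "2009_history": (800, 300),
    "2008_history": (900, 120),
}

FLASH_BUDGET_GB = 100  # invented capacity

# Greedy: spend scarce flash on the most frequently read bytes.
by_heat = sorted(objects, key=lambda o: objects[o][1] / objects[o][0], reverse=True)

flash, disk, used = [], [], 0
for name in by_heat:
    size = objects[name][0]
    if used + size <= FLASH_BUDGET_GB:
        flash.append(name)
        used += size
    else:
        disk.append(name)

print("flash:", flash)  # the small, hot objects
print("disk:", disk)    # the big, cold history
```

The Vertica claim, in effect, is that a small flash budget along these lines captures most of the hot reads.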
Categories: Columnar database management, Data warehousing, Database compression, Solid-state memory, Vertica Systems | 10 Comments |
The Netezza and IBM DB2 approaches to compression
Thursday, I spent 3 ½ hours talking with 10 of Netezza’s more senior engineers. Friday, I talked for 1 ½ hours with IBM Fellow and DB2 Chief Architect Tim Vincent, and we agreed we needed at least 2 hours more. In both cases, the compression part of the discussion seems like a good candidate to split out into a separate post. So here goes.
When you sell a row-based DBMS, as Netezza and IBM do, there are a couple of approaches you can take to compression. First, you can compress the blocks of rows that your DBMS naturally stores. Second, you can compress the data in a column-aware way. Both Netezza and IBM have chosen completely column-oriented compression, with no block-based techniques entering the picture to my knowledge. But that’s about as far as the similarity between Netezza and IBM compression goes. Read more
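For context, here's a generic sketch of the two approaches – neither vendor's actual format:

```python
# Two generic ways a row store can compress -- neither is Netezza's nor
# DB2's actual format, just the contrast described above.
import zlib

rows = [(i, "2010-09-%02d" % (i % 30 + 1), 100 + i % 3) for i in range(1000)]

# 1. Block-based: serialize a block of rows, feed it to a byte compressor.
block = repr(rows).encode()
block_compressed = zlib.compress(block)

# 2. Column-aware: exploit per-column structure, e.g. delta-encode the
# monotonically increasing id column so it becomes a run of 1s.
ids = [r[0] for r in rows]
deltas = [ids[0]] + [b - a for a, b in zip(ids, ids[1:])]  # [0, 1, 1, 1, ...]

print(len(block), "->", len(block_compressed), "bytes (block-based)")
print("distinct delta values:", set(deltas))  # tiny alphabet -> compresses well
```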
Categories: Data warehousing, Database compression, IBM and DB2, Microsoft and SQL*Server, Netezza | 17 Comments |
Netezza’s silicon balance
As I’ve mentioned in a couple of other posts, Netezza is stressing that the most recent wave of its technology is software-only, with no hardware upgrades made or needed. In other words, Netezza boxes already have all the silicon they need. But of course, there are really at least three major aspects to the Netezza silicon story – FPGA (Field-Programmable Gate Array), CPU, and RAM.
- Netezza planned to be “generous” in its original TwinFin FPGA capacity, anticipating software upgrades like the ones it’s introducing now. It is satisfied that this strategy worked. More on this below.
- The same surely applies to CPU.
- What’s more, I get the sense that the CPU turned out in practice to be even more over-provisioned than they anticipated …
- … at least when one just considers Netezza’s base NPS software.
- However, I suspect that if the advanced analytics capability takes off, Netezza will determine that more CPU is always better.
- And by the way, NEC is making versions of Netezza appliances with more advanced chips than Netezza is. So if anybody should really, really need more CPU in their Netezza boxes, there’s a very straightforward way to make that happen. (And if there were nontrivial demand for that, appropriate support plans could surely be structured.)
- Everybody needs to be careful about RAM. Netezza is surely no exception.
The major parts of Netezza’s FPGA software are:
- Compress Engine 2. This is Netezza’s new way of doing compression.
- Compress Engine 1. This is Netezza’s old way of doing compression. It is being kept around so that existing Netezza tables don’t suddenly have to be changed or reloaded.
- Project Engine. Guess what this does.
- Restrict Engine. Ditto.
- Visibility Engine. This enforces ACID and handles row-level security. It is “sort of a corner of” the Restrict Engine. (Actually, Netezza seems to waver as to whether to describe “Restrict” and “Visibility” as being two engines or one.)
- Miscellaneous plumbing.
If I understood correctly, each Netezza FPGA has two copies of each of these engines running in parallel.
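Here's a toy software model of how such engines might chain together on a stream of rows – my reconstruction for illustration, not Netezza's actual FPGA logic:

```python
# Toy software model of the engine pipeline -- my reconstruction, not
# Netezza's FPGA logic. Data streams off disk and through each stage.

def decompress(block):              # Compress Engine 2 (or 1, for old tables)
    return block                    # identity here; the real engine decodes on the fly

def visibility(rows, txn_id):       # Visibility Engine: ACID / row security
    return (r for r in rows if r["created_by_txn"] <= txn_id)

def restrict(rows, pred):           # Restrict Engine: WHERE-clause filtering
    return (r for r in rows if pred(r))

def project(rows, cols):            # Project Engine: keep only needed columns
    return ({c: r[c] for c in cols} for r in rows)

block = [
    {"id": 1, "state": "MA", "amount": 10, "created_by_txn": 5},
    {"id": 2, "state": "CA", "amount": 20, "created_by_txn": 9},
    {"id": 3, "state": "MA", "amount": 30, "created_by_txn": 7},
]

out = project(
    restrict(visibility(decompress(block), txn_id=8), lambda r: r["state"] == "MA"),
    ["id", "amount"],
)
print(list(out))  # [{'id': 1, 'amount': 10}, {'id': 3, 'amount': 30}]
```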
Related link
- An August, 2009 post on what Netezza does in its FPGA
Categories: Data warehouse appliances, Data warehousing, Database compression, Netezza, Theory and architecture | Leave a Comment |
The underlying technology of QlikView
QlikTech* finally decided both to become a client and, surely not coincidentally, to give me more technical detail about QlikView than it had when last we talked a couple of years ago. Indeed, I got to spend a couple of hours on the phone not just with Anthony Deighton, but also with QlikTech’s Hakan Wolge, who wrote 70-80% of the code in QlikView 1.0, and remains in effect QlikTech’s chief architect to this day.
*Or, as it now appears to be called, Qlik Technologies.
Let’s start with some quick reminders:
- QlikTech makes QlikView, a widely popular business intelligence (BI) tool suite.
- QlikView is distinguished by the flexibility of navigation through its user interface.
- To support this flexibility, QlikView preloads all data you might want to query into memory.
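To make that design point concrete, here's a toy version – invented data, and obviously not QlikTech's actual engine:

```python
# Toy version of the stated design point: load everything up front,
# then answer every UI gesture from memory. Data is invented.

dataset = [
    {"customer": "Acme", "region": "EMEA", "sales": 120},
    {"customer": "Bolt", "region": "APAC", "sales": 75},
    {"customer": "Cape", "region": "EMEA", "sales": 200},
]  # in the real product this is preloaded when the document is opened

def select(field, value):
    # Every click is just an in-memory scan -- no database round trip,
    # which is what makes free-form navigation feel instant.
    return [row for row in dataset if row[field] == value]

print(select("region", "EMEA"))                            # two rows
print(sum(r["sales"] for r in select("region", "EMEA")))   # 320
```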
Let’s also dispose of one confusion right up front, namely QlikTech’s use of the word associative: Read more
Categories: Business intelligence, Database compression, Memory-centric data management, QlikTech and QlikView | 36 Comments |
Ingres VectorWise technical highlights
After working through problems w/ travel, cell phones, and so on, Peter Boncz of VectorWise finally caught up with me for a regrettably brief call. Peter gave me the strong impression that what I’d written in the past about VectorWise had been and remained accurate, so I focused on filling in the gaps. Highlights included: Read more
Categories: Actian and Ingres, Analytic technologies, Benchmarks and POCs, Columnar database management, Data warehousing, Database compression, Open source, VectorWise | 2 Comments |
Algebraix
I talked Friday with Chris Piedemonte and Gary Sherman, respectively the Cofounder/CTO and Chief Mathematician of Algebraix, who teamed up on this project back in 2003 or 2004. (Algebraix is the company formerly known as XSPRADA.) Algebraix makes an analytic DBMS, somewhat based on the ideas of extended set theory, that runs on SMP (Symmetric MultiProcessing) boxes. Like all analytic DBMS vendors, Algebraix has on some occasions run some queries orders of magnitude faster than they ran on the systems users were looking to replace.
Algebraix’s secret sauce is that the DBMS keeps reorganizing and recopying the data on disk, to optimize performance in response to expected query patterns (automatically inferred from queries it’s seen so far). This sounds a lot like the Infobright story, with some of the more obvious differences being: Read more
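Here's a cartoon of that adapt-to-the-workload loop – entirely my own illustration, not Algebraix's extended-set-theory machinery:

```python
# Cartoon of adaptive reorganization -- my invention for illustration,
# not Algebraix's extended-set-theory machinery.
from collections import Counter

table = [(3, "c"), (1, "a"), (2, "b"), (5, "e"), (4, "d")]
predicate_columns = Counter()

def run_query(filter_col):
    predicate_columns[filter_col] += 1   # remember what queries ask about

def background_reorganize():
    # Recopy the data, sorted by the column queries filter on most often,
    # so future range scans over that column touch contiguous storage.
    if predicate_columns:
        hot_col, _ = predicate_columns.most_common(1)[0]
        table.sort(key=lambda row: row[hot_col])

for _ in range(5):
    run_query(0)        # the workload keeps filtering on column 0
run_query(1)

background_reorganize()
print(table)  # now ordered by column 0: [(1, 'a'), (2, 'b'), ...]
```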
Categories: Algebraix, Data warehousing, Database compression, Infobright, Theory and architecture | 3 Comments |
More on Sybase IQ, including Version 15.2
Back in March, Sybase was kind enough to give me permission to post a slide deck about Sybase IQ. Well, I’m finally getting around to doing so. Highlights include but are not limited to:
- Slide 2 has some market success figures and so on. (>3100 copies at >1800 users, >200 sales last year)
- Slides 6-11 give more detail on Sybase’s indexing and data access methods than I put into my recent technical basics of Sybase IQ post.
- Slide 16 reminds us that in-database data mining is quite competitive with what SAS has actually delivered with its DBMS partners, even if it doesn’t have the nice architectural approach of Aster or Netezza. (I.e., Sybase IQ’s more-than-SQL advanced analytics story relies on C++ UDFs – User-Defined Functions – running in-process with the DBMS; see the sketch after this list.) In particular, there’s a data mining/predictive analytics library – modeling and scoring both – licensed from a small third party.
- A number of the other later slides also have quite a bit of technical crunch. (More on some of those points below too.)
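Here's a toy contrast of in-process versus ship-the-data-out scoring – illustrative Python only; Sybase IQ's real UDFs are C++ loaded into the server process:

```python
# Toy contrast of in-process vs. out-of-process scoring -- illustrative
# only; Sybase IQ's real UDFs are C++ running inside the server process.

model = {"intercept": 0.1, "weight": 0.02}    # made-up scoring model

def score(row):                               # the "UDF"
    return model["intercept"] + model["weight"] * row["balance"]

rows = [{"id": i, "balance": 100 * i} for i in range(1, 6)]

# In-process: the engine calls the function inside its own scan loop;
# no rows cross a process or network boundary.
scores = [(r["id"], score(r)) for r in rows]
print(scores)

# The alternative is to export the rows, score them in a separate
# analytic process, and load the scores back -- same math, plus a
# round trip per batch (or worse, per row).
```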
Sybase IQ may have a bit of a funky architecture (e.g., no MPP), but the age of the product and the substantial revenue it generates have allowed Sybase to put in a bunch of product features that newer vendors haven’t gotten around to yet.
More recently, Sybase volunteered permission for me to preannounce Sybase IQ Version 15.2 by a few days (it’s scheduled to come out this week). Read more