August 12, 2008
Compare/contrast of Vertica, ParAccel, and Exasol
I talked with Exasol today – at 5:00 am! – and of course want to blog about it. For clarity, I’d like to start by comparing/contrasting the fundamental data structures at Vertica, ParAccel, and Exasol. And it feels like that should be a separate post. So here goes.
- Exasol, Vertica, and ParAccel all store data in columnar formats.
- Exasol, Vertica, and ParAccel all compress data heavily.
- Exasol, Vertica, and ParAccel all – perhaps to varying extents – operate on in-memory data in compressed formats. (A rough sketch of what that can mean follows this list.)
- ParAccel and Exasol write data to what amounts to the in-memory part of their basic data structures; the data then gets persisted to disk. Vertica, however, has a separate in-memory data structure to accept data and write it to disk.
- Vertica is a disk-centric system that doesn’t rely on there being a lot of RAM.
- ParAccel can be described that way too; however, in some cases (including on the TPC-H benchmarks), ParAccel recommends loading all your data into RAM for maximum performance.
- Exasol is thoroughly optimized for the assumption that queries will be run against data that has already been loaded into RAM.
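None of these vendors publish their internal formats, so here is only a rough, hypothetical illustration of what “operating on in-memory data in compressed formats” can mean: a column held as run-length-encoded (value, run length) pairs and aggregated without ever decompressing it. All names below are made up for illustration, and this is not any vendor’s actual design.

    # Hypothetical sketch only: a repetitive column is stored as
    # (value, run_length) pairs, and the aggregate is computed
    # straight from the compressed form.

    def rle_encode(values):
        """Compress a column into (value, run_length) pairs."""
        runs = []
        for v in values:
            if runs and runs[-1][0] == v:
                runs[-1][1] += 1
            else:
                runs.append([v, 1])
        return runs

    def rle_sum(runs):
        """Sum the column directly from its compressed representation."""
        return sum(value * length for value, length in runs)

    column = [5, 5, 5, 7, 7, 9]        # a tiny, highly repetitive column
    compressed = rle_encode(column)     # [[5, 3], [7, 2], [9, 1]]
    assert rle_sum(compressed) == sum(column)

The attraction of the general technique is that the less you decompress, the more of the working set stays in RAM and the fewer bytes each operator has to touch.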
Beyond the above, I plan to discuss in a separate post how Exasol does MPP shared-nothing software-only columnar data warehouse database management differently than Vertica and ParAccel do shared-nothing software-only columnar data warehouse database management. 🙂
Categories: Columnar database management, Data warehousing, Database compression, Exasol, ParAccel, Vertica Systems
Comments
12 Responses to “Compare/contrast of Vertica, ParAccel, and Exasol”
From your blog post “Response to Rita Sallam …”, I note your comment: “TPCs are a joke too.”
If so, why bother to mention them here?
Interesting, since Vertica in comparison is much more open in “revealing” what goes on behind the scenes, whereas with the other two what goes on is very, very tough to get from their websites. I learned a lot more from your site, as well as from another blog called fulltablescan. That is one reason why I have a soft corner for Vertica. BTW, what do you think about Pentaho? I did not see any comments from you anywhere on your site!
Doug,
Perhaps I should have said that most conclusions drawn from TPCs are jokes. I wouldn’t say that TPCs provide no evidence for any claim at any time.
But if you think about it, in the post I mainly was illustrating what TPCs did NOT show — namely, great disk-centric performance for ParAccel. They may have it, but the TPCs don’t show that, because the TPCs weren’t done on a disk-centric configuration.
CAM
I haven’t talked w/ Pentaho. Both Lance Walter and I have been guilty at various times over the past year of being slow getting back to each other. I’m the guiltier of the two.
CAM
WRT [possible] great disk-centric performance for ParAccel:
I’m quite certain they don’t have it, at least not yet. This is why they only used memory-based configurations, at the very bottom of the scale factors (100GB, 300GB, and 1000GB).
It would seem that the memory-based solutions (ParAccel, Exasol) are only effective if all of the required data is in memory. For example, take this customer benchmark from Exasol. The first run of the queries took 20 minutes, compared to 24 minutes on the customer’s existing system. The explanation from Exasol is that during the first run EXASolution “completely reorganized the internal data and performed internal optimizations, e.g. it generated an index”. Of course, the subsequent runs took significantly less time, but let’s be realistic: that is not a new trick. Both DB2 and Oracle have features (Query Patroller and Results Cache) that can just return the result if a given query is run more than once. IMO, so-called customer benchmarks where the same queries are executed more than once are quite unimpressive.
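A toy sketch of the general idea behind that kind of results caching, not DB2’s or Oracle’s actual implementation, and with invalidation on data change deliberately omitted:

    # Toy sketch: return a stored result when the exact same query text
    # is seen again, instead of re-executing it.  Real result caches also
    # invalidate entries when the underlying data changes; omitted here.

    class ResultCache:
        def __init__(self, execute):
            self.execute = execute      # the real query executor (a callable)
            self.results = {}           # query text -> cached result rows

        def run(self, sql):
            if sql not in self.results:
                self.results[sql] = self.execute(sql)   # first run: do the work
            return self.results[sql]                    # repeat runs: instant

    cache = ResultCache(lambda sql: [("row1",), ("row2",)])  # stand-in executor
    first = cache.run("SELECT * FROM calls")     # executes the query
    second = cache.run("SELECT * FROM calls")    # served from the cache
    assert first is second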
Now, if my data warehouse or data mart follows the pattern of most – nightly bulk load, then automated KPI reports (or similar) – is there any advantage to a product like this (ParAccel, Exasol)? I don’t know the answer, but I am interested in knowing if you do.
WRT the validity of TPC-H, or of conclusions drawn from it:
Whether or not you think TPC-H is valid, there are audited and validated metrics in the full disclosure reports that would probably allow you to cross-check some of the metrics you report on. For instance, in your post on TEOCO you relayed claims about troubles loading CDRs into Oracle.
This full disclosure report shows that an Oracle database was able to load the data for the entire 30TB scale factor (almost 260 billion rows) in just over 16 hours. Loading data is not rocket science, but it appears that with TEOCO there was a bit of EBKAC going on. That would also seem to be confirmed by Paul’s comment. Would you agree, Curt?
Re: EBKAC — well, it was a phone call. 🙂
But yeah, I’d say there was something quite confusing about how the statement was framed. With the numbers that far out of whack, the task has to have been something very different from what we commonly think of as “load”.
CAM
Re in-memory vs. cache:
I’m pretty sure that, say, Exasol, ParAccel, QlikView, and SAP BI Accelerator all do a much better job than row-based DBMSs’ caches do. Compression lets you put more in RAM. Convincing the cache to preload exactly what you want isn’t always as straightforward as running the right query at the right time. Etc.
CAM
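As a rough, hypothetical illustration of the point above that compression lets you put more in RAM, here is a sketch that dictionary-encodes a repetitive string column into one-byte codes. The figures it prints are illustrative only, not measurements of any product.

    # Rough illustration: dictionary-encode a repetitive string column
    # into small integer codes so far more of it fits in memory.
    import sys

    column = ["approved", "pending", "approved", "approved", "rejected"] * 100_000

    # Map each distinct value to a small integer code (order of first appearance).
    dictionary = {v: i for i, v in enumerate(dict.fromkeys(column))}
    codes = bytes(dictionary[v] for v in column)    # one byte per row here

    # Approximate size if every row carried its own string object,
    # versus the dictionary-encoded column.
    raw_bytes = sum(sys.getsizeof(v) for v in column)
    print(f"raw: ~{raw_bytes:,} bytes   encoded: ~{len(codes):,} bytes")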
Pentaho sells open source and open-source-based business intelligence tools. They use the Mondrian ROLAP server, which relies on a back-end DBMS, but Pentaho does not itself provide database technology.
Therefore Pentaho is literally not comparable to Exasol, ParAccel, or Vertica. Pentaho is a different genre of product.
Seth, thanks very much for having a great website. I got to know that Clareos Crosscut is in fact the ParAccel Analytic Database and, as usual, googled around and got a technical architecture doc for Crosscut. Amazing. But correct me if I am wrong: now that I know a lot more about Vertica (or C-Store), I can truly compare Vertica/C-Store with ParAccel/Clareos Crosscut. Regarding your comment on Pentaho, that was a very generic question I asked Curt, and I know very well that Pentaho does not belong to this genre of columnar DB. Again, a very generic question. I am fascinated by Pentaho because of all the material their website provides, as well as its partnerships with both Vertica and ParAccel.
My 2 cents – ParAccel and Vertica are on a collision course, because both are great products with superb engineering brains behind them. Now who will blink first? 😉
[…] Last spring, DATAllegro user John Devolites of TEOCO told me of troubles his firm had had loading CDRs (Call Detail Records) into Oracle, and how those had been instrumental in his eventual adoption of DATAllegro. That claim was contemptuously challenged in a couple of comment threads. […]
[…] Author: Curt Monash. Original publication date: 2008-08-12. Source: Curt Monash’s blog […]
Interested to know more about your views on these columnar DBs versus Sybase IQ. Even though everyone bags Sybase, it seems some are holding tightly to Sybase IQ, as they have had it for aeons compared to these newcomers.