Vertica Systems
Analysis of columnar data warehouse DBMS vendor Vertica Systems.
Some big-vendor execution questions, and why they matter
When I drafted a list of key analytics-sector issues in honor of look-ahead season, the first item was “execution of various big vendors’ ambitious initiatives”. By “execute” I mean mainly:
- “Deliver products that really meet customers’ desires and needs.”
- “Successfully convince them that you’re doing so …”
- “… at an attractive overall cost.”
Vendors mentioned here are Oracle, SAP, HP, and IBM. Anybody smaller got left out due to the length of this post. Among the bigger omissions were:
- salesforce.com (multiple subjects).
- SAS HPA.
- The evolution of Hadoop.
Analytic trends in 2012: Q&A
As a new year approaches, it’s the season for lists, forecasts and general look-ahead. Press interviews of that nature have already begun. And so I’m working on a trilogy of related posts, all based on an inquiry about hot analytic trends for 2012.
This post is a moderately edited form of an actual interview. Two other posts cover analytic trends to watch (planned) and analytic vendor execution challenges to watch (already up).
Vertica Community Edition
The press release announcing Vertica’s Community Edition is a bit vague. And indeed, much of what I know about Vertica Community Edition is along the lines of “This is what I think will happen, but of course it could still change.” That said, I believe:
- Vertica Community Edition has all of regular Vertica’s features. However …
- … HP Vertica reserves the right to open a feature gap in future releases.
- The license restriction on Vertica Community Edition is that you’re limited to 1 terabyte of data, and 3 nodes. I imagine that’s for one production copy, and you’re perfectly free to also set up mirrors for test, development, disaster recovery, and so on. However …
- … HP Vertica would be annoyed if you stuck a free copy of Vertica on each of 50 nodes and managed the whole thing via, say, Hadapt.
- HP Vertica plans to be very generous with true academic researchers, suspending or waiving limits on database size and node count. Not coincidentally, Vertica Community Edition is being announced at XLDB, where Vertica is also a top-level sponsor. (I introduced Vertica and XLDB’s Jacek Becla to each other as soon as I heard about Vertica’s Community Edition plans.)
- The only support available for Vertica Community Edition is through forums. This could change.
I’m a big supporter of the Vertica Community Edition idea, for four reasons:
- It should now be easier to download and evaluate Vertica.
- Vertica Community Edition could be a big help to academic researchers.
- Vertica could now be more appealing to some of the “Omigod, we’re outgrowing Oracle Standard Edition and we don’t want to pay up for Oracle Enterprise Edition/Exadata” crowd.
- Many people mistakenly assume that what Vertica actually charges today resembles its long-ago list prices. This announcement may help puncture that outdated pricing image.
HP systems soundbites
It is widely rumored that there will be a leadership change at HP (Meg Whitman in, Leo Apotheker out). In connection with that, I found myself holding forth on points such as:
- HP needs to make outstanding enterprise systems again.
- They fell away from that target under Mark Hurd, but they surely can hit it again, based on the remnants of DEC (Digital Equipment Corporation), Tandem, the higher-end part of Compaq, and of course the original HP systems group.
- In particular:
- Rumors say that Oracle Exadata 1 boxes, made by HP, were much lower quality than Exadata 2 boxes made by Sun.
- HP Neoview was a waste of good engineering talent.
- I’d like to see a few excellent Vertica appliances.
- I hope the SAP HANA appliances go well, whenever HANA finally becomes a serious product.
- The general move from disk to solid-state memory should offer some opportunities.
Vertica projections — an overview
Partially at my suggestion, Vertica has blogged a three-part series explaining the “projections” that are central to a Vertica database. This is important, because in Vertica projections play the roles that in many analytic DBMS might be filled by base tables, indexes, AND materialized views. Highlights include:
- A Vertica projection can contain:
- All the columns in a table.
- Some of the columns in a table.
- A prejoin among tables.
- Vertica projections are updated and maintained just as base tables are. (I.e., there's no batch-update lag.)
- You can import the same logical schema you use elsewhere. Vertica puts no constraints on your logical schema. Note: Vertica has been claiming good support for all logical schemas since Vertica 4.0 came out in early 2010.
- Vertica (the product) will automatically generate a physical schema for you — i.e. a set of projections — that Vertica (the company) thinks will do a great job for you. Note: That also dates back to Vertica 4.0.
- Vertica claims that queries are very fast even when you haven’t created projections explicitly for them. Note: While the extent to which this is true may be a matter of dispute, competitors clearly overreach when they make assertions like “every major Vertica query needs a projection prebuilt for it.”
- On the other hand, it is advisable to build projections (automatically or manually) that optimize performance of certain parts of your query load.
The blog posts contain a lot more than that, of course, both rah-rah and technical detail, including reminders of other Vertica advantages (compression, no logging, etc.). If you’re interested in analytic DBMS, they’re worth a look.
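To make the projection concept a bit more concrete, below is a minimal sketch of what defining and querying one might look like from Python. It assumes the vertica_python client and a hypothetical sales table; the projection name, columns, and DDL options are illustrative only, and exact syntax varies by Vertica version.

```python
# Minimal sketch only: assumes the vertica_python client and a hypothetical
# "sales" table; exact CREATE PROJECTION options vary by Vertica version.
import vertica_python

conn_info = {
    "host": "127.0.0.1",
    "port": 5433,
    "user": "dbadmin",
    "password": "",
    "database": "analytics",
}

conn = vertica_python.connect(**conn_info)
cur = conn.cursor()

# A projection over SOME of a table's columns, sorted to suit a particular
# query pattern: roughly the role an index or materialized view plays in
# other analytic DBMS.
cur.execute("""
    CREATE PROJECTION sales_by_customer AS
    SELECT customer_id, sale_date, amount
    FROM sales
    ORDER BY customer_id, sale_date
    SEGMENTED BY HASH(customer_id) ALL NODES
""")

# Queries are written against the logical table; the optimizer picks
# whichever projection(s) it judges best for each query.
cur.execute("SELECT customer_id, SUM(amount) FROM sales GROUP BY customer_id")
print(cur.fetchall())

conn.close()
```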
Data management at Zynga and LinkedIn
Mike Driscoll and his Metamarkets colleagues organized a bit of a bash Thursday night. Among the many folks I chatted with were Ken Rudin of Zynga, Sam Shah of LinkedIn, and D. J. Patil, late of LinkedIn. I now know more about analytic data management at Zynga and LinkedIn, plus some bonus stuff on LinkedIn’s People You May Know application. 🙂
It’s blindingly obvious that Zynga is one of Vertica’s petabyte-scale customers, given that Zynga sends 5 TB/day of data into Vertica, and keeps that data for about a year. (Zynga may retain even more data going forward; in particular, Zynga regrets ever having thrown out the first month of data for any game it’s tried to launch.) This is game actions, for the most part, rather than log files; true logs generally go into Splunk.
I don’t know whether the missing data is completely thrown away, or just stashed on inaccessible tapes somewhere.
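For a rough sense of why those figures imply petabyte scale, here's a back-of-the-envelope calculation (my arithmetic based on the figures above, not anything Zynga stated directly):

```python
# Back-of-the-envelope arithmetic from the figures above; illustrative only.
daily_ingest_tb = 5      # ~5 TB/day loaded into Vertica
retention_days = 365     # data kept for about a year

raw_retained_tb = daily_ingest_tb * retention_days
print(f"~{raw_retained_tb:,} TB retained, i.e. roughly "
      f"{raw_retained_tb / 1000:.1f} PB of raw data before compression")
# -> ~1,825 TB retained, i.e. roughly 1.8 PB of raw data before compression
```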
I found two aspects of the Zynga story particularly interesting. First, those 5 TB/day are going straight into Vertica (from, I presume, memcached/Membase/Couchbase), as Zynga decided that sending the data to some kind of log first was more trouble than it’s worth. Second, there’s Zynga’s approach to analytic database design. Highlights of that include: Read more
HP/Autonomy sound bites
HP has announced that:
- HP is buying Autonomy.
- HP is pulling back from WebOS.
- HP may spin off its PC business altogether.
On a high level, this means:
- HP is doubling down on enterprise IT.
- HP is taking a more software-centric approach to the enterprise IT business.
- HP is backing away from the consumer electronics business.
- HP in particular is backing away from the generic desktop/laptop PC business, which may with only moderate exaggeration be regarded as:
- The intersection of the enterprise IT and consumer electronics businesses.
- The least attractive sector of each.
My coverage of Autonomy isn’t exactly current, but I don’t know of anything that contradicts long-time competitor* Dave Kellogg’s skeptical view of Autonomy. Autonomy is a collection of businesses involved in the management, search, and retrieval of poly-structured data, in some cases with strong market share, but even so not necessarily with the strongest of reputations for technology or technology momentum. Autonomy started from a text search engine with a Bayesian search algorithm on top, which did a decent job for many customers. But if there’s been much in the way of impressive enhancement over the past 8-10 years, I’ve missed the news.
*Dave, of course, was CEO of MarkLogic.
Questions obviously arise about how the Autonomy acquisition relates to other HP businesses. My early thoughts include: Read more
Hadoop hardware and compression
A month ago, I posted about typical Hadoop hardware. After talking today with Eric Baldeschwieler of Hortonworks, I have an update. I also learned some things from Eric and from Brian Christian of Zettaset about Hadoop compression.
First the compression part. Eric thinks 6-10X compression is common for “curated” Hadoop data — i.e., the data that actually gets used a lot. Brian used an overall figure of 6-8X, and told of a specific customer who had 6X or a little more. By way of comparison, these sound like the same kinds of data for which Vertica claimed 10-60X compression almost three years ago.
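To put those ratios in storage terms, here's a quick illustrative calculation; the 100 TB raw figure is purely hypothetical, and only the compression ratios come from the conversations above.

```python
# Illustrative only: what different compression ratios mean for a
# hypothetical 100 TB of raw, "curated" data.
raw_tb = 100
ratios = {
    "Hadoop, low end": 6,
    "Hadoop, high end": 10,
    "Vertica claim, low": 10,
    "Vertica claim, high": 60,
}
for label, ratio in ratios.items():
    print(f"{label:20s} {ratio:3d}X -> {raw_tb / ratio:6.1f} TB on disk")
```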
Eric also made an excellent point about low-value machine-generated data. I was suggesting that as Moore’s Law made sensor networks ever more affordable: Read more
Eight kinds of analytic database (Part 2)
In Part 1 of this two-part series, I outlined four variants on the traditional enterprise data warehouse/data mart dichotomy, and suggested what kinds of DBMS products you might use for each. In Part 2 I’ll cover four more kinds of analytic database — even newer, for the most part, and with an even less clear match between use cases and product short lists. Read more
Eight kinds of analytic database (Part 1)
Analytic data management technology has blossomed, leading to many questions along the lines of “So which products should I use for which category of problem?” The old EDW/data mart dichotomy is hopelessly outdated for that purpose, and adding a third category for “big data” is little help.
Let’s try eight categories instead. While no categorization is ever perfect, these each have at least some degree of technical homogeneity. Figuring out which types of analytic database you have or need — and in most cases you’ll need several — is a great early step in your analytic technology planning. Read more