Notes on HBase
I talked with a couple of Cloudera folks about HBase last week. Let me frame things by saying:
- The closest thing to an HBase company, à la MongoDB/MongoDB or DataStax/Cassandra, is Cloudera.
- Cloudera still uses a figure of 20% of its customers being HBase-centric.
- HBaseCon and so on notwithstanding, that figure isn’t really reflected in Cloudera’s marketing efforts. Cloudera’s marketing commitment to HBase has never risen to nearly the level of MongoDB’s or DataStax’s push behind their respective core products.
- With Cloudera’s move to “zero/one/many” pricing, Cloudera salespeople have little incentive to push HBase hard to accounts other than HBase-first buyers.
Also:
- Cloudera no longer dominates HBase development, if it ever did.
- Cloudera is the single biggest contributor to HBase, by its count, but doesn’t make a majority of the contributions on its own.
- Cloudera sees Hortonworks as having become a strong HBase contributor.
- Intel is also a strong contributor, as are end user organizations such as Chinese telcos. Not coincidentally, Intel was a major Hadoop provider in China before the Intel/Cloudera deal.
- As far as Cloudera is concerned, HBase is just one data storage technology of several, focused on high-volume, high-concurrency, low-latency short-request processing. Cloudera thinks this is OK because of HBase’s strong integration with the rest of the Hadoop stack.
- Others who may be inclined to disagree are in several cases doing projects on top of HBase to extend its reach. (In particular, please see the discussion below about Apache Phoenix and Trafodion, both of which want to offer relational-like functionality.)
Cloudera’s views on HBase history — in response to the priorities I brought to the conversation — include:
- HBase initially favored consistency over performance/availability, while Cassandra initially favored the opposite choice. Both products, however, have subsequently become more tunable in those tradeoffs.
- Cloudera’s initial contributions to HBase focused on replication, disaster recovery and so on. I guess that could be summarized as “scaling”.
- Hortonworks’ early HBase contributions included (but were not necessarily limited to):
- Making recovery much faster (tens of seconds or less, rather than minutes or more).
- Some of that consistency vs. availability tuning.
- “Coprocessors” were added to HBase roughly three years ago as an extensibility mechanism, with the first use being in security/permissions. (A brief sketch of the idea follows this list.)
- With more typical marketing-oriented version numbers:
- HBase 0.90, the first release that did a good job on durability, could have been 1.0.
- HBase 0.92 and 0.94, which introduced coprocessors, could have been 2.0.
- HBase 0.96 and 0.98 could have been 3.0.
- The recent HBase 1.0 could have been 4.0.
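To make the coprocessor point a bit more concrete, here is a minimal sketch of a region observer in Java, assuming the observer API of the 0.96-and-later line. The class name and the “restricted” column family are made up for illustration, and this is of course not how HBase’s actual AccessController works; it just shows the extension point that security/permissions were built on.

```java
import java.io.IOException;

import org.apache.hadoop.hbase.DoNotRetryIOException;
import org.apache.hadoop.hbase.client.Durability;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.regionserver.wal.WALEdit;
import org.apache.hadoop.hbase.util.Bytes;

// Hypothetical example: an observer that rejects writes to a "restricted"
// column family. It runs inside the region server, hooked into the write path.
public class RestrictedFamilyObserver extends BaseRegionObserver {

    private static final byte[] RESTRICTED_FAMILY = Bytes.toBytes("restricted");

    @Override
    public void prePut(ObserverContext<RegionCoprocessorEnvironment> ctx,
                       Put put, WALEdit edit, Durability durability)
            throws IOException {
        // Called before the Put is applied to the region.
        if (put.getFamilyCellMap().containsKey(RESTRICTED_FAMILY)) {
            throw new DoNotRetryIOException(
                "Writes to the 'restricted' family are not allowed here");
        }
    }
}
```

A jar containing such a class would typically be registered either per-table or cluster-wide via hbase.coprocessor.region.classes in hbase-site.xml.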
The HBase roadmap includes:
- A kind of BLOB/CLOB (Binary/Character Large OBject) support.
- Intel is heavily involved in this feature.
- The initial limit is 10 megabytes or so, due to some limitations in the API (I didn’t ask why that made sense). This happens to be all the motivating Chinese customer needs for the traffic photographs it wants to store.
- Various kinds of “multi-tenancy” support (multi-tenancy is one of those terms whose meaning is getting stretched beyond recognition), including:
- Mixed workload support (short-request and analytic) on the same nodes.
- Mixed workload support on different nodes in the same cluster.
- Security between different apps in the same cluster.
- (Still in the design phase) Bottleneck Whack-A-Mole, with goals including but not limited to:
- Scale-out beyond the current assumed limit of ~1200 nodes.
- More predictable performance, based on smaller partition sizes.
- (Possibly) Multi-data-center fail-over.
Not on the HBase roadmap per se are global/secondary indexes. Rather, we talked about projects on top of HBase which are meant to provide those. One is Apache Phoenix (a brief usage sketch follows the list below), which supposedly:
- Makes it simple to manage compound keys. (E.g., City/State/ZipCode)
- Provides global secondary indexes (but not in a fully ACID way).
- Offers some very basic JOIN support.
- Provides a JDBC interface.
- Offers efficiencies in storage utilization, scan optimizations, and aggregate calculations.
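To give a feel for what that looks like in practice, here is a small, hypothetical sketch against Phoenix’s JDBC driver. The table, index, column names and the ZooKeeper address are all made up, and I haven’t checked this against any particular Phoenix release.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class PhoenixSketch {
    public static void main(String[] args) throws Exception {
        // Phoenix exposes HBase through a standard JDBC driver;
        // "localhost" stands in for the ZooKeeper quorum.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost")) {

            // Compound primary key: Phoenix maps (state, city, zip) onto the HBase row key.
            conn.createStatement().execute(
                "CREATE TABLE IF NOT EXISTS addresses (" +
                "  state CHAR(2) NOT NULL, " +
                "  city VARCHAR NOT NULL, " +
                "  zip CHAR(5) NOT NULL, " +
                "  population BIGINT " +
                "  CONSTRAINT pk PRIMARY KEY (state, city, zip))");

            // Global secondary index, maintained by Phoenix (not in a fully ACID way).
            conn.createStatement().execute(
                "CREATE INDEX IF NOT EXISTS zip_idx ON addresses (zip)");

            // Phoenix uses UPSERT rather than separate INSERT/UPDATE statements.
            PreparedStatement upsert = conn.prepareStatement(
                "UPSERT INTO addresses (state, city, zip, population) VALUES (?, ?, ?, ?)");
            upsert.setString(1, "CA");
            upsert.setString(2, "San Francisco");
            upsert.setString(3, "94105");
            upsert.setLong(4, 837442L);
            upsert.executeUpdate();
            conn.commit();

            // Lookups on the indexed column avoid a full table scan.
            try (ResultSet rs = conn.createStatement().executeQuery(
                    "SELECT city, population FROM addresses WHERE zip = '94105'")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1) + ": " + rs.getLong(2));
                }
            }
        }
    }
}
```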
Another such project is Trafodion — supposedly the Welsh word for “transaction” — open sourced by HP. This seems to be based on NonStop SQL and Neoview code, which counter-intuitively have always been joined at the hip.
There was a lot more to the conversation, but I’ll stop here for two reasons:
- This post is pretty long already.
- I’m reserving some of the discussion until after I’ve chatted with vendors of other NoSQL systems.
Related link
- My July 2011 post on HBase offers context, as do the comments on it.
Comments
One minor point of clarification.
Roughly 20% of our customers are HBase-focused and buy support only for HBase and core Hadoop. So 20% choose the “flex”, a.k.a. just-one-component, support level.
Of the remaining 80% of customers, a similar percentage use HBase, but in conjunction with other parts of the stack (Impala, Spark, etc.), and thus choose the “EDH” or many-component support level.
– Jon
[…] Continuing from last week’s HBase post, the Cloudera folks were fairly proud of HBase’s features for performance and scalability. […]
I would also not omit Splice Machine from the list of relational DBMSs on top of HBase, mostly because they claim to be
– fully ACID
– ANSI SQL compliant
– MVCC-based
– highly concurrent.
What I miss is how well those claims are acknowledged and borne out in the context of their commercial offering.
>> A kind of BLOB/CLOB (Binary/Character Large OBject) support
That is because HDFS is bad at storing large numbers of small files. I am not a DB expert, but I think that storing blobs is usually delegated to a file system.
It’s not BLOB, it’s actually BMOB (binary medium objects), to support storing digital pictures (not digital movies). In my humble opinion this could be done without hacking HBase from the inside, but nevertheless there is strong support for including this feature in HBase.