Parallelization
Analysis of issues in parallel computing, especially parallelized database management.
Hadoop evolution
I wanted to learn more about Hadoop and its futures, so I talked Friday with Arun Murthy of Hortonworks.* Most of what we talked about was:
- NameNode evolution, and the related issue of file-count limitations.
- JobTracker evolution.
Arun previously addressed these issues and more in a June slide deck.
Categories: Hadoop, MapReduce, Parallelization, Workload management, Yahoo | 7 Comments |
Couchbase technical update
My Couchbase business update with Bob Wiederhold was very interesting, but it didn’t answer much about the actual Couchbase product. For that, I talked with Dustin Sallings. We jumped around a lot, and some important parts of the Couchbase product haven’t had their designs locked down yet anyway. But here’s at least a partial explanation of what’s up.
memcached is a way to cache data in RAM across a cluster of servers and have it all look logically like a single memory pool; it is extremely popular among large internet companies. The Membase product — which is what Couchbase has been selling this year — adds persistence to memcached, an obvious improvement on requiring application developers to write both to memcached and to non-transparently-sharded MySQL. The main technical points in adding persistence seem to have been:
- A persistent backing store (duh), namely SQLite.
- A change to the hashing algorithm, to avoid losing data when the cluster configuration is changed. (The sketch below illustrates the idea.)
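To make the hashing point concrete, here's a quick Python sketch (my illustration, not actual Membase code) contrasting naive modulo hashing, which remaps most keys whenever a server is added, with a fixed-bucket scheme along the lines of Membase's vbuckets, in which keys hash to stable buckets and only the bucket-to-server assignments move:

```python
import hashlib

NUM_BUCKETS = 1024   # fixed for the life of the cluster

def bucket_for(key):
    """Hash a key to a stable bucket; the bucket count never changes."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % NUM_BUCKETS

# Naive scheme: hash keys straight onto servers. Adding a server
# remaps most keys, so cached or persisted data is effectively lost.
def naive_server(key, num_servers):
    return bucket_for(key) % num_servers

keys = [f"user:{i}" for i in range(10000)]
moved = sum(naive_server(k, 4) != naive_server(k, 5) for k in keys)
print(f"naive: {moved / len(keys):.0%} of keys remapped")   # ~80%

# vbucket-style scheme: keys map to fixed buckets; only the
# bucket-to-server table changes, and it can be migrated gradually.
bucket_map = {b: b % 4 for b in range(NUM_BUCKETS)}   # 4 servers

def vbucket_server(key):
    return bucket_map[bucket_for(key)]

# Add a fifth server: reassign about a fifth of the buckets. Every
# key keeps its bucket, so only those buckets' data needs to move.
for b in range(0, NUM_BUCKETS, 5):
    bucket_map[b] = 4
```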
Couchbase is essentially Membase improved by integrating CouchDB into it, with the main changes being:
- Changing the backing store to CouchDB (duh). This will be in the first Couchbase release.
- Adding cross-data-center replication, based on CouchDB's consistency model. This will not, I believe, be in the first Couchbase release.
- Offering CouchDB's programming and query interfaces as an option. So far as I can tell, this will be implemented straightforwardly in the first Couchbase release, with elegance planned for further down the road. (A sketch of that interface style follows this list.)
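For a flavor of those interfaces, here is a minimal Python sketch against CouchDB's standard HTTP API, using the requests library. The database, design document, and field names are hypothetical, and this illustrates plain CouchDB rather than anything Couchbase-specific:

```python
import requests

BASE = "http://localhost:5984/events"   # hypothetical database name

requests.put(BASE)   # create the database if it doesn't already exist

# Views live in design documents; the map (and optional reduce)
# functions are JavaScript strings that CouchDB indexes incrementally.
design_doc = {
    "views": {
        "by_user": {
            "map": "function(doc) { emit(doc.user, doc.bytes); }",
            "reduce": "_sum",
        }
    }
}
requests.put(f"{BASE}/_design/stats", json=design_doc)

# Insert a document, then query the view.
requests.post(BASE, json={"user": "alice", "bytes": 512})
resp = requests.get(f"{BASE}/_design/stats/_view/by_user",
                    params={"group": "true"})
print(resp.json())   # e.g. {"rows": [{"key": "alice", "value": 512}]}
```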
Let’s drill down a bit into Membase/Couchbase clustering and consistency. Read more
Categories: Cache, Clustering, Couchbase, memcached, Memory-centric data management, MySQL, Parallelization, Solid-state memory | 8 Comments |
Introduction to Zettaset
Zettaset is confusing, but as best I understand it:
- Zettaset sells Hadoop add-on/enhancement software, with what might be called an enterprise-friendly Hadoop management focus.
- “Business intelligence” gets mentioned prominently in Zettaset’s marketing, but not in what executives now say. Apparently the BI focus is old news, predating a hard pivot.
- Zettaset’s marketing also mentions NoSQL, for little reason that I can discern, except insofar as Zettaset relies on HBase.
- CEO Brian Christian told me that Zettaset has been around since December 2007; on the other hand, Zettaset's press release boilerplate says the company was founded in 2009. Apparently, the distinction is that Zettaset was founded in 2007 as a consultancy, but turned its efforts to software development in 2009.
- Zettaset has fewer than 20 people but is “hiring like mad.”
- Zettaset just did a $3 million Series A round — or maybe is just announcing it now; the latter interpretation might explain how those 20ish people are getting paid.
- Zettaset’s product was just launched and made generally available, notwithstanding that Version 2 of Zettaset’s product was shipped last year to fanfare on Zettaset’s blog.
- Zettaset’s pricing is based on how many terabytes of compressed data it is being used to manage.
- Until very recently, Zettaset was called GOTO Metrics; I imagine the name change is connected to the strategy pivot.
- Zettaset told me of one big customer — with an almost-petabyte Hadoop cluster before compression — namely Zions Bancorporation.
- Zettaset has “a number of paying customers” overall.
Categories: Hadoop, HBase, MapReduce, Market share and customer counts, Specific users, Zettaset | 4 Comments |
An odd claim attributed to Mike Stonebraker
This post has a sequel.
Last week, Mike Stonebraker insulted MySQL and Facebook’s use of it, by implication advocating VoltDB instead. Kerfuffle ensued. To the extent Mike was saying that non-transparently sharded MySQL isn’t an ideal way to do things, he’s surely right. That still leaves a lot of options for massive short-request databases, however, including transparently sharded RDBMS, scale-out in-memory DBMS (whether or not VoltDB*), and various NoSQL options. If nothing else, Couchbase would seem superior to memcached/non-transparent MySQL if you were starting a project today.
*The big problem with VoltDB, last I checked, was its reliance on Java stored procedures to get work done.
Pleasantries continued in The Register, which got an amazing-sounding quote from Mike. If The Reg is to be believed — something I wouldn't necessarily take for granted — Mike claimed that he (i.e. VoltDB) knows how to solve the distributed join performance problem. (A toy sketch of why that problem is hard follows.)
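For context on why that would be a remarkable claim: in a shared-nothing system, when the join key is not the partitioning key, at least one table must be repartitioned (shuffled) across the network before any node can join locally. Below is a toy Python sketch of that shuffle step; in a real MPP system each partition lives on a different machine, so every row moved in the shuffle is network traffic.

```python
# Toy model: each table is pre-partitioned across 4 "nodes" by some
# other key, so matching join keys may start out on different nodes.
NUM_NODES = 4

orders = [("o1", "alice"), ("o2", "bob"), ("o3", "alice")]
users = [("alice", "US"), ("bob", "FR")]

def shuffle_by_key(rows, key_index):
    """Repartition rows so equal join keys land on the same node.
    In an MPP database, this is the expensive network step."""
    partitions = [[] for _ in range(NUM_NODES)]
    for row in rows:
        partitions[hash(row[key_index]) % NUM_NODES].append(row)
    return partitions

order_parts = shuffle_by_key(orders, key_index=1)   # by user
user_parts = shuffle_by_key(users, key_index=0)     # by user

# After the shuffle, each node can do a purely local hash join.
for node in range(NUM_NODES):
    local = {u: country for (u, country) in user_parts[node]}
    for (oid, u) in order_parts[node]:
        print(node, oid, u, local[u])
```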
Categories: Cache, Clustering, Couchbase, Games and virtual worlds, In-memory DBMS, memcached, Michael Stonebraker, MySQL, Parallelization, Theory and architecture, VoltDB and H-Store | 20 Comments |
Hadoop futures and enhancements
Hadoop is immature technology. As such, it naturally offers much room for improvement in both industrial-strengthness and performance. And since Hadoop is booming, multiple efforts are underway to fill those gaps. For example:
- Cloudera’s proprietary code is focused on management, set-up, etc.
- The “Phase 1” plans Hortonworks shared with me for Apache Hadoop are focused on industrial-strengthness, as are significant parts of “Phase 2”.*
- MapR tells a performance story versus generic Apache Hadoop HDFS and MapReduce. (One aspect of same is just C++ vs. Java.)
- So does Hadapt, but mainly vs. Hive.
- Cloudera also tells me there’s a potential 4-5X performance improvement in Hive coming down the pike from what amounts to an optimizer rewrite.
(Zettaset belongs in the discussion too, but made an unfortunate choice of embargo date.)
Categories: Cloudera, Greenplum, Hadapt, Hadoop, HBase, MapR, MapReduce, Parallelization, Zettaset | 20 Comments |
Cloudera and Hortonworks
My clients at Cloudera have been around for a while, in effect positioned as “the Hadoop company.” Their business, in a nutshell, consists of:
- Packaging up a Cloudera distribution of Apache Hadoop. This distribution doesn’t have proprietary code; it’s just packaged by Cloudera from Apache projects (with a decent minority of the code happening to have been contributed by Cloudera engineers).
- Paid subscription support for Apache Hadoop and, in connection with that …
- … proprietary software that all support customers automatically get. There are two points to this proprietary software:
- It adds value for the customer.
- It makes Cloudera’s support job easier.
- Professional services around Hadoop.
- Training and conferences around Hadoop, which probably don’t generate all that much money, but are great marketing in terms of visibility, thought leadership, and lead generation.
Hortonworks spun out of Yahoo last week, with parts of the Cloudera business model, namely Hadoop support, training, and I guess conferences. Hortonworks emphatically rules out professional services, and says that it will contribute all code back to Apache Hadoop. Hortonworks does grudgingly admit that it might get into the proprietary software business at some point — but evidently hopes that day will never actually come.
Categories: Cloudera, Hadoop, Hortonworks, IBM and DB2, MapReduce, Open source, Yahoo | 9 Comments |
Sybase IQ soundbites
Sybase made a total hash of the timing of this week’s press release. I got annoyed after they promised to inform me of the new embargo time, then broke the promise. Other people got annoyed earlier than that.
So be it. Below is the draft of a post I was holding, with brackets added around one word that is no longer accurate.
I don’t write enough about Sybase IQ. That said, I offered a couple of quotes to a reporter [yesterday] in connection with the general availability of Sybase IQ 15.3. Lightly edited, they go:
- “Shared-everything MPP” isn’t a total contradiction in terms. It’s great for adding in concurrent users. And there’s little doubt that Sybase IQ can support robust access to databases 10s of terabytes in size.
- As I first noted a couple of years ago, virtual data marts are a good idea. Too few vendors are making it easy to spin them out. They let departments start doing analytics very quickly, yet allow IT to keep partial control.
Beyond that, I should note:
- Sybase IQ is the classic choice for what I call traditional data marts.
- Sybase IQ is a leader in temporal functionality, which is not coincidental to its presence in the financial services market.
Categories: Columnar database management, Data warehousing, Parallelization, Sybase, Theory and architecture | Leave a Comment |
Hadapt update
I met with the Hadapt guys today. I think I can be a bit crisper than before in positioning Hadapt and its use cases, namely:
- Hadapt is additional software on a cluster that also runs fully functional Hadoop/HDFS. (Cloudera Hadoop more than straight-from-Apache Hadoop to date, but that’s not a requirement.)
- The cluster also runs a DBMS on every node, such as PostgreSQL or one of Infobright/Vectorwise.
- Hadapt’s software manages parallel SQL queries by distributing them to the DBMS living on each node. Hadapt says that the resulting query performance far outshines Hive’s.
- Hadapt further says that, by exploiting the partner DBMS, its SQL functionality outpaces Hive’s as well.
- Target Hadapt use cases are centered around keeping machine-generated or other poly-structured data in Hadoop, and extracting, enhancing, or otherwise deriving some of it to live in the relational store.
- In particular, Hadapt seems like an interesting choice when you want to use that relational data as you work on other data that’s still in HDFS, or if you want to keep using the relational data in other kinds of MapReduce jobs.
- That all fits well with my thoughts about the importance of derived data.
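To illustrate the split-execution idea behind that positioning, here is a crude Python sketch that fans a SQL fragment out to node-local PostgreSQL instances via psycopg2 and unions the results. This is my sketch of the general approach, not Hadapt's actual code; the host names, database, and table are hypothetical, and a real planner would also rewrite aggregates and joins:

```python
import concurrent.futures
import psycopg2

NODE_HOSTS = ["node1", "node2", "node3"]   # hypothetical cluster

def run_fragment(host, fragment_sql):
    """Run a query fragment against the DBMS local to one node."""
    conn = psycopg2.connect(host=host, dbname="warehouse")
    try:
        with conn.cursor() as cur:
            cur.execute(fragment_sql)
            return cur.fetchall()
    finally:
        conn.close()

def parallel_query(fragment_sql):
    """Fan the fragment out to every node and union the partial
    results. This covers only the embarrassingly parallel
    scan/filter case, which is the easy part of the problem."""
    with concurrent.futures.ThreadPoolExecutor(len(NODE_HOSTS)) as pool:
        parts = pool.map(run_fragment, NODE_HOSTS,
                         [fragment_sql] * len(NODE_HOSTS))
    return [row for part in parts for row in part]

# 'clicks' is a hypothetical table partitioned across the nodes.
rows = parallel_query("SELECT url, status FROM clicks WHERE status >= 500")
```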
Other evolution from what I wrote about Hadapt a few months ago includes:
- Hadapt is in beta now.
- Hadapt has added adult supervision in the form of Philip Wickline, late of Endeca.
In other news, Hadapt is our newest client.
Petabyte-scale Hadoop clusters (dozens of them)
I recently learned that there are 7 Vertica clusters with a petabyte (or more) each of user data. So I asked around about other petabyte-scale clusters. It turns out that there are several dozen such clusters (at least) running Hadoop.
Cloudera can identify 22 CDH (Cloudera Distribution [of] Hadoop) clusters holding one petabyte or more of user data each, at 16 different organizations. This does not count Facebook or Yahoo, who are huge Hadoop users but not, I gather, running CDH. Meanwhile, Eric Baldeschwieler of Hortonworks tells me that Yahoo’s latest stated figures are:
- 42,000 Hadoop nodes …
- … holding 180-200 petabytes of data.
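For perspective, that works out to roughly 4-5 terabytes of data per node (180-200 petabytes spread across 42,000 nodes).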
Eight kinds of analytic database (Part 2)
In Part 1 of this two-part series, I outlined four variants on the traditional enterprise data warehouse/data mart dichotomy, and suggested what kinds of DBMS products you might use for each. In Part 2 I'll cover four more kinds of analytic database — even newer, for the most part, and with an even less clear match between use cases and product short lists.