Software as a Service (SaaS)

Analysis of software-as-a-service offerings with a database or analytic focus, or data connectivity tools focused on SaaS.

October 9, 2009

I have some presentations coming up (all on October Thursdays)

On Thursday, October 15, at two different times (10:00 am and 1:00 pm Eastern time), I’ll be giving a webinar for Aster Data on MapReduce. The content is very much a work in progress, but it definitely will:

Then, on the evening of Thursday, October 22, there’s something called the Boston Big Data Summit, in Waltham, where “Big Data” evidently is to be construed as anything from a few terabytes on up. (Things are smaller in the Northeast than in California …) It’s being put together by Amrith Kumar (whom I don’t really know) and Bob Zurek (whom everybody knows). This is the inaugural meeting. It seems I’m both giving the keynote and running the subsequent panel, one of whose participants will be Ellen Rubin.

July 29, 2009

What are the best choices for scaling Postgres?

March 2011 edit: In its quaintness, this post is a reminder of just how fast Short Request Processing DBMS technology has been moving ahead. If I had to do it all over again, I’d suggest they use one of the high-performance MySQL options like dbShards, Schooner, or both together. I actually don’t know what they finally decided on in that area. (I do know that for analytic DBMS they chose Vertica.)

I have a client who wants to build a new application with a peak update volume of several million transactions per hour. (Their base business is data mart outsourcing, but now they’re building update-heavy technology as well.) They have a small budget. They’ve been a MySQL shop in the past, but would prefer to contract (not eliminate) their use of MySQL rather than expand it.

My client actually signed a deal for EnterpriseDB’s Postgres Plus Advanced Server and GridSQL, but unwound the transaction quickly. (They say EnterpriseDB was very gracious about the reversal.) There seem to have been two main reasons for the flip-flop. First, it seems that EnterpriseDB’s version of Postgres isn’t up to PostgreSQL’s 8.4 feature set yet, although EnterpriseDB’s timetable for catching up might have been tolerable. But GridSQL apparently is still further behind, with no timetable for up-to-date PostgreSQL compatibility. That was the dealbreaker.

The current base-case plan is to use generic open source PostgreSQL, with scale-out achieved via hand sharding, Hibernate, or … ??? Experience and thoughts along those lines would be much appreciated.
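
For concreteness, here is a minimal sketch of what application-level hand sharding against plain PostgreSQL could look like, assuming psycopg2 as the driver and a numeric customer ID as the shard key; the shard count, connection strings, and table are invented for illustration, not anything the client has settled on.

```python
# Hypothetical hand-sharding sketch: route each write to one of N independent
# PostgreSQL instances based on a hash of the shard key. Illustrative only.
import psycopg2

SHARD_DSNS = [
    "dbname=app_shard0 host=pg0",
    "dbname=app_shard1 host=pg1",
    "dbname=app_shard2 host=pg2",
    "dbname=app_shard3 host=pg3",
]

def shard_for(customer_id):
    """Pick a shard by simple modulo hashing of the shard key."""
    return SHARD_DSNS[hash(customer_id) % len(SHARD_DSNS)]

def record_event(customer_id, payload):
    """Send a single-row insert to the shard that owns this customer."""
    conn = psycopg2.connect(shard_for(customer_id))
    try:
        with conn, conn.cursor() as cur:  # connection context manager commits on success
            cur.execute(
                "INSERT INTO events (customer_id, payload) VALUES (%s, %s)",
                (customer_id, payload),
            )
    finally:
        conn.close()
```

The familiar drawbacks (cross-shard joins, resharding when the node count changes, distributed transactions) are exactly the gaps that packaged sharding options try to fill.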

Another option for OLTP performance and scale-out is of course a memory-centric product such as VoltDB or the Groovy SQL Switch. But this client’s database is terabyte-scale, so hardware costs could be an issue, as of course could product maturity.

By the way, a large fraction of these updates will be actual changes, as opposed to new records, in case that matters.  I expect that the schema being updated will be very simple — i.e., clearly simpler than in a classic order entry scenario.

July 7, 2009

Hasso Plattner calls for in-memory OLTP column stores

Former SAP CEO Hasso Plattner has written a paper called A Common Database Approach for OLTP and OLAP Using an In-Memory Column Database, in association with a SIGMOD keynote address.* The approach Plattner advocates is an MPP in-memory column store, presumably somewhat akin to SAP’s frequently renamed Business Warehouse Accelerator/Business Intelligence Accelerator/BWA/BIA/Son-of-TREX technology. There also are strong similarities to the MPP in-memory row store project H-Store/VoltDB, although I don’t know whether Plattner would go so far as to adopt the H-Store view that all transactions should run in stored procedures. Unsurprisingly, SAP applications are used as the OLTP paradigm throughout.

*Thanks to Dave Kellogg for tipping me off to Plattner’s paper. I only went to two SIGMOD sessions, neither of which was Plattner’s. Nobody actually mentioned Plattner’s talk to me when I was down at SIGMOD.

Perhaps the most interesting part is Plattner’s claim that what’s demanding about OLTP isn’t database updating per se, but rather maintaining aggregates for quick-response analytics. In his main example of that point, Plattner proposes a real-life schema of “more than 18” tables, of which 2 are base tables, and (most of?) the rest are materialized views that his proposed database architecture dispenses with (because analytic performance is sufficiently good without them). Thus, Plattner’s core columnar argument seemingly is:

columnar → natively fast analytics → no need to maintain aggregates → much lower update burden.

That said — if Plattner’s paper contained a clear statement of how much more expensive it is to insert or update a single row in a columnar vs. row-based system, I overlooked it. Instead, Plattner seems to be arguing that the volume of base-table updates is low enough that — whatever it may be — column-store update overhead is an acceptable price to pay.  (At one point he claims that only 5% of the data inserted in a financial application ever gets changed.) That may actually be true in a financial accounting system, but seems more questionable in a sufficiently large application that gets its updates from automatic devices, or from the consumer web.
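
To make the tradeoff concrete, here is a toy sketch (mine, not Plattner’s) of the two costs being weighed: a column store pays for each insert once per column, but it can answer an aggregate query by scanning a single column instead of maintaining a materialized total on every update.

```python
# Toy illustration of the columnar tradeoff: per-column insert cost vs.
# aggregating on the fly instead of maintaining aggregates. Not from the paper.

row_store = []                                        # one tuple per row
col_store = {"id": [], "region": [], "amount": []}    # one list per column

def insert(row_id, region, amount):
    row_store.append((row_id, region, amount))  # row store: 1 write per insert
    col_store["id"].append(row_id)              # column store: 1 write per column
    col_store["region"].append(region)
    col_store["amount"].append(amount)

def total_for(region):
    # With a fast columnar scan, no materialized total has to be kept current.
    return sum(amount for reg, amount in zip(col_store["region"], col_store["amount"])
               if reg == region)

insert(1, "EMEA", 100.0)
insert(2, "APAC", 250.0)
print(total_for("EMEA"))  # 100.0
```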

Other highlights include:

May 29, 2009

Sneakernet to the cloud

Recently, Amazon CTO Werner Vogels put up a blog post suggesting that, now and in the future, the best way to get large databases into the cloud is via sneakernet. In some circumstances, he is surely right. Possible implications include:

But for one-time moves of data sets — sure, sneakernet/snail mail should work just fine.
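
Some rough arithmetic, with assumed dataset sizes and link speeds rather than figures from Vogels’ post, shows why:

```python
# Back-of-the-envelope upload times; sizes and link speeds are assumptions.

def days_to_upload(terabytes, megabits_per_second):
    bits = terabytes * 1e12 * 8
    seconds = bits / (megabits_per_second * 1e6)
    return seconds / 86400

for tb in (1, 10, 100):
    for mbps in (10, 100, 1000):
        print(f"{tb:>4} TB over {mbps:>5} Mbit/s: {days_to_upload(tb, mbps):7.1f} days")
```

At 100 terabytes, even a full gigabit link needs over a week of sustained transfer, at which point overnight shipping of disk drives starts to look very attractive.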

April 3, 2009

Amazon Elastic MapReduce

Amazon is introducing a beta of Amazon Elastic MapReduce.  What it boils down to is cheap, on-demand Hadoop.

This seems like a great way to experiment with MapReduce and see if you like it. But for serious use, I don’t know why you wouldn’t prefer MapReduce more closely integrated into a DBMS.
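
For anyone who does want to experiment, the canonical starter exercise is a word count. A minimal Hadoop Streaming version in Python, runnable on Elastic MapReduce or any other Hadoop cluster, might look roughly like this; the file name and invocation details are illustrative only.

```python
#!/usr/bin/env python
# wordcount.py (hypothetical name): a minimal Hadoop Streaming word count,
# typically wired up as -mapper "wordcount.py map" -reducer "wordcount.py reduce".
import sys

def map_phase():
    # Emit one "word<TAB>1" line per word read from stdin.
    for line in sys.stdin:
        for word in line.split():
            print(f"{word}\t1")

def reduce_phase():
    # Streaming delivers mapper output sorted by key, so counts can be summed per run of equal keys.
    current, count = None, 0
    for line in sys.stdin:
        word, n = line.rstrip("\n").split("\t")
        if word != current:
            if current is not None:
                print(f"{current}\t{count}")
            current, count = word, 0
        count += int(n)
    if current is not None:
        print(f"{current}\t{count}")

if __name__ == "__main__":
    mode = sys.argv[1] if len(sys.argv) > 1 else "map"
    (map_phase if mode == "map" else reduce_phase)()
```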

April 1, 2009

April Fool’s Day highlights

Amazon says it’s taking “cloud” computing to new heights, as it were.

Derivative funds and large government-subsidized entities will be especially interested in FACE’s transmodal operation. They can allocate a dedicated FACE, load it up with data, and then send it out to sea to perform advanced processing in safety. The government will have absolutely no chance of acting against them, because they will be too busy trying to decide which Federal Air Regulation (FAR) was violated, not to mention scheduling news conferences.

The first excellent April Fool’s joke I saw this year was from The Guardian. The best so far is from Expedia. Others are linked in my Twitter feed. And personally, I’m encouraging the concept of April No-Fooling Day.

March 23, 2009

SAS in its own cloud

The Register has a fairly detailed article about SAS expanding its cloud/SaaS offerings.  I disagree with one part, namely:

SAS may not have a choice but to build its own cloud. Given the sensitive nature of the data its customers analyze, moving that data out to a public cloud such as the Amazon EC2 and S3 combo is just not going to happen.

And even if rugged security could make customers comfortable with that idea, moving large data sets into clouds (as Sun Microsystems discovered with the Sun Grid) is problematic. Even if you can parallelize the uploads of large data sets, it takes time.

But if you run the applications locally in the SAS cloud, then doing further analysis on that data is no big deal. It’s all on the same SAN anyway, locked down locally just as you would do in your own data center.

I fail to see why SAS’s campus would be better than leading hosting companies’ data centers for either data privacy/security or data upload speed. Rather, I think the major reasons for SAS building its own data center for cloud computing probably focus on:

February 12, 2009

IBM in the cloud

IBM is making DB2, Informix Dynamic Server, and other products available in the Amazon cloud.  The press release says test and development are free, while production will be charged at an hourly rate.  No doubt more price details will be forthcoming when the whole thing is fully in production.

Frankly, I’ve lost track of who all has some kind of cloud or SaaS offering now. The list includes at least Oracle, IBM, presumably Microsoft, MySQL (via Elastra, and also at almost every web host), PostgreSQL (ditto, more or less), EnterpriseDB, Kognitio, Vertica, Netezza, and Aster Data. No doubt I’m forgetting a bunch more.

February 10, 2009

Aster Data in the cloud

Aster Data is in the news, bragging about a cloud version of nCluster, and providing both a press release and a blog post on the subject. It seems there are three actual customers, two of which have been publicly named. One of them, ShareThis, is in production (2 terabytes of data on 9 nodes, planning to scale to 10-18 TB on 24 or so nodes by year-end). All seem to be doing something in the area of internet marketing, web analytics, or the like — which makes sense, as the same could be said of almost all Aster customers overall. That said, it seems that these customers are doing their primary analytic processing remotely, which makes Aster’s experience in that regard more akin to Kognitio’s than to Vertica’s.

January 12, 2009

Database SaaS gains a little visibility

Way back in the 1970s, a huge fraction of analytic database management was done via timesharing, specifically in connection with the RAMIS and FOCUS business-intelligence-precursor fourth-generation languages. (Both were written by Gerry Cohen, who built his company Information Builders around the latter one.) The market for remote-computing business intelligence has never wholly gone away since. Indeed, it’s being revived now, via everything from the analytics part of Salesforce.com to the service category I call data mart outsourcing.

Less successful to date are efforts in the area of pure database software-as-a-service. It seems that if somebody is going for SaaS anyway, they usually want a more complete, integrated offering. The most noteworthy exceptions to this general rule that I can think of are Kognitio and Vertica, and they only have a handful of database SaaS customers each. To wit:
