Business intelligence
Analysis of companies, products, and user strategies in the area of business intelligence. Related subjects include:
- Data warehousing
- Business Objects
- Cognos
- QlikTech
- (in Text Technologies) Text mining
- (in Text Technologies) Text analytics/business intelligence integration
- (in The Monash Report) Strategic issues in business intelligence
- (in Software Memories) Historical notes on business intelligence
Notes on Datameer
In a short October 2011 post about Datameer, I wrote:
Datameer is designed to let you do simple stuff on large amounts of data, where “large amounts of data” typically means data in Hadoop, and “simple stuff” includes basic versions of a spreadsheet, of BI, and of EtL (Extract/Transform/Load, without much in the way of T).
That’s all still mainly true, although with the recent Datameer 2.0:
- You can run Datameer and the underlying Hadoop on a desktop or workgroup server.
- There are some infographics pretty-picture-drawing capabilities, which will surely delight those who like vector-based HTML 5 pictures of coffee cups, saucers and macaroons.
- No doubt Datameer has been generally enhanced on multiple fronts.
In essence, Datameer has two positionings.
- One is “OK, you’ve got Hadoop — now wouldn’t you like to do something useful with it?” That can include both business intelligence and ETL.
- Beyond that, Datameer founder/CEO Stefan Groschupf’s core argument is that schema-on-read is really, really useful, even at the cost of absorbing a potentially large performance hit. In other words, he’s making a case for a form of non-relational BI.
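To illustrate what schema-on-read means in general (a minimal sketch of the concept, not Datameer’s implementation), the “schema” is simply whatever the query asks for at read time:

```python
import json

# Raw, semi-structured event lines; no schema was enforced when they were written.
raw_events = [
    '{"user": "alice", "action": "click", "ms": 120}',
    '{"user": "bob", "action": "purchase", "amount": 19.99}',
    '{"user": "alice", "action": "click"}',
]

def read_with_schema(lines, fields):
    """Schema-on-read: parse and project each record at query time,
    tolerating fields that are absent from any given record."""
    for line in lines:
        record = json.loads(line)
        yield {f: record.get(f) for f in fields}

# The "schema" is just whatever this particular query asks for.
for row in read_with_schema(raw_events, ["user", "action", "amount"]):
    print(row)
```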
Categories: Business intelligence, Data models and architecture, Datameer, EAI, EII, ETL, ELT, ETLT, Hadoop, Log analysis, Market share and customer counts, Web analytics | 8 Comments |
Five different kinds of business intelligence
Having recently categorized seven different kinds of database, let me now make a similar effort for business intelligence. To a first approximation, I’d like to split BI use cases into 2×2 = 4 groups, along two dimensions:
- Is there an operational business process involved?
- Is there a focus on root cause analysis?
That could lead to the categories:
- Both operational and root-cause: Real-time monitoring (or more precisely human real-time).
- Operational but not root-cause: Operational BI without that monitoring aspect — e.g., checking whether an expense submission makes sense.
- Root-cause but not operational: Investigative/exploratory BI.
- Neither operational nor root-cause: Monitoring BI without an immediate operational aspect — e.g., checking the dashboard periodically.
Those, in turn, could be more descriptively named:
- Real-time (preferably with a modifier such as “quasi” or “human”).
- Tactical (or task-oriented/-centric).
- Investigative (or exploratory).
- Traditional BI.
To complete the list, I’ll add a fifth category, as explained below.
Categories: Business intelligence | 11 Comments |
Approximate query results
In theory:
- A database query is a predicate.
- A DBMS matches the data it manages against the predicate and sends back those records for which the predicate is true.
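For concreteness, here is a minimal, purely illustrative sketch of that set-theoretic view, with a query expressed as a predicate over a handful of hypothetical records:

```python
# A query viewed as a predicate: a function that is true or false for each record.
orders = [
    {"id": 1, "region": "EMEA", "amount": 250},
    {"id": 2, "region": "APAC", "amount": 900},
    {"id": 3, "region": "EMEA", "amount": 1200},
]

def predicate(r):
    return r["region"] == "EMEA" and r["amount"] > 500

# The "DBMS" returns exactly those records for which the predicate is true.
result = [r for r in orders if predicate(r)]
print(result)  # [{'id': 3, 'region': 'EMEA', 'amount': 1200}]
```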
And so it would seem that query results always have to be exact. Even so, there are at least four different practical scenarios in which query results can reasonably be regarded as approximate, each associated with query languages that can supersede standard set-theoretic SQL.
Actually, there’s a fifth, and it’s a huge one — some fraction of your data is just plain wrong. But that’s not what this post is about.
First, some queries don’t have binary results, even in principle. Notably, text queries are answered via relevancy rankings, which fit badly into the relational model.
Second — and this can be combined with the first — you might want to generalize the query to look for partial matches. For example, Yarcdata suggested to me a scenario in which:
- You do a SPARQL query.
- You modify the query to accept results higher up in the taxonomy. (Which is likely to be possible, because where there’s SPARQL, there’s apt to be a taxonomy as well.) For example, if you really want to query on two people living in the same house, you might extend the query to cover two people connected by any kind of address or building.
Similarly, if you’re looking for geographic proximity, it’s common to extend the allowed radius to fish for more results. Or one can walk up the hierarchy in a dimensional model.
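Here is a minimal sketch of that relaxation pattern, assuming a hypothetical geographic-proximity query (illustrative only, not YarcData’s or any vendor’s actual code): keep widening the radius until enough results come back.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

# Hypothetical points of interest: (name, latitude, longitude).
places = [("A", 40.71, -74.00), ("B", 40.73, -73.99), ("C", 41.00, -73.70)]

def nearby(center, radius_km, min_results=2, max_radius_km=100):
    """Relax the query: double the radius until enough results come back."""
    lat0, lon0 = center
    while radius_km <= max_radius_km:
        hits = [p for p in places
                if haversine_km(lat0, lon0, p[1], p[2]) <= radius_km]
        if len(hits) >= min_results:
            return radius_km, hits
        radius_km *= 2  # fish for more results with a looser predicate
    return radius_km, []

print(nearby((40.72, -74.00), radius_km=1))
```

A taxonomy walk-up in SPARQL, or in a dimensional model, follows the same shape: loosen the predicate step by step until the result set is big enough.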
Third, sometimes you just don’t have the data for any kind of precise answer at all. One adaptation I’ve mentioned before is to interpolate time series with synthetic data, and send back “precise” results based on that. In the same post I mentioned the Vertica “range join”, wherein users deliberately throw away part of their data — only storing the range it was in — and then join accordingly.
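A minimal sketch of the interpolation idea (not Vertica’s range join, and not any vendor’s actual code): fill the gaps in a sparse time series with synthetic, linearly interpolated values, and answer the query as if the data were complete.

```python
def interpolate_series(observations, timestamps):
    """Given sparse (t, value) observations, return a value for every
    requested timestamp, linearly interpolating across the gaps.
    The interpolated points are synthetic data, not measurements."""
    obs = sorted(observations)
    results = []
    for t in timestamps:
        # Find the observed points that bracket t.
        earlier = [o for o in obs if o[0] <= t]
        later = [o for o in obs if o[0] >= t]
        if not earlier or not later:
            results.append((t, None))  # can't even fake an answer here
            continue
        (t0, v0), (t1, v1) = earlier[-1], later[0]
        if t0 == t1:
            results.append((t, v0))    # exact hit on a real observation
        else:
            frac = (t - t0) / (t1 - t0)
            results.append((t, v0 + frac * (v1 - v0)))  # synthetic value
    return results

# Sensor readings at minutes 0, 10, and 30; the query wants every 5 minutes.
print(interpolate_series([(0, 20.0), (10, 22.0), (30, 26.0)], range(0, 31, 5)))
```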
As Donald Rumsfeld might have said — and would have done well to reflect upon — you go into decision-making with the data you have, not the data you wish you had.
Finally, sometimes there’s a precise answer in principle, but for performance reasons you accept an approximate one, at least to start with. Numerous companies have told me stories around this, including:
- Infobright, whose “Rough Query” gives fast approximate results to a broad range of queries.
- Metamarkets, which does fast cardinality estimates via HyperLogLog.
- Aster Data, which was the first company to point out to me that calculations of medians, deciles, quintiles, and the like are a lot faster in a shared-nothing setting if you’re willing to settle for approximate results. (A sampling-based version of that idea is sketched just below.)
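The sketch below shows the general flavor of approximate computation, assuming a simple random-sampling approach (it is not Aster’s algorithm, and it is much cruder than HyperLogLog): estimate a median from a sample instead of sorting the full data set.

```python
import random

def approximate_quantile(values, q, sample_size=10_000, seed=42):
    """Estimate the q-th quantile (0 < q < 1) from a random sample,
    trading a small probabilistic error for far less sorting."""
    rng = random.Random(seed)
    sample = values if len(values) <= sample_size else rng.sample(values, sample_size)
    ordered = sorted(sample)
    return ordered[int(q * (len(ordered) - 1))]

# A million synthetic values; the exact median would mean sorting all of them.
data = [random.random() for _ in range(1_000_000)]
print("approximate median:", approximate_quantile(data, 0.5))  # close to 0.5
```

In a shared-nothing system the same trick parallelizes naturally, since each node can sample or summarize its own partition.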
The latter two categories led me to ask vendors how customers actually make use of their exotic SQL capabilities. Answers boiled down to:
- (Always) Well, there’s a lot of custom coding.
- (Sometimes) We’re working with partner BI vendors to make direct use of the capabilities, but that’s not done yet, so it’s too early to talk about any details.
Perhaps the answers will never get much better; it’s tough to get packaged software vendors to support vendor-specific SQL, unless the vendor is Oracle. Even so, we’re seeing ever more ways in which conventional SQL DBMS are being superseded by data management and analytic alternatives.
Categories: Aster Data, Business intelligence, Data models and architecture, Data warehousing, Database compression, Infobright, Text, Vertica Systems, Yarcdata and Cray | 3 Comments |
How important is BI flexibility?
How flexible does business intelligence technology need to be? Should it allow fully flexible ad-hoc data analysis, or does that overwhelm users? Are they perhaps happier with simpler, more prescriptive analytic paths? My answer is a resounding “It depends”.
On the one hand, it’s clear that some users really care about business intelligence flexibility. They don’t want the “right” dimensional hierarchy, carefully worked out in advance. They don’t even want fixed drilldown paths smartly calculated on the fly, à la Endeca (which, after all, ultimately didn’t succeed). Rather, they want to be able to truly choose aggregations and roll-ups for themselves.
Supporting this view is the rise of in-memory business intelligence. For example:
- SAP HANA is selling in impressive quantities.
- Further, HANA and alternatives are generating a lot of buzz. For example:
- Multiple clients have asked me for help positioning their products against HANA and Exalytics.
- Kognitio’s pretense of being HANA-like is getting it some sales too.
- QlikView has had considerable success.
But why would anybody pay up for the speed of in-memory BI? Analytic RDBMS offer blazing speed for broad ranges of queries. Parameterized reports let you do drilldowns in memory. So only if you need great flexibility do you need to keep a whole analytic data set permanently in RAM.
Categories: Business intelligence, Market share and customer counts, Memory-centric data management, PivotLink, Teradata | Leave a Comment |
Introduction to Metamarkets and Druid
I previously dropped a few hints about my clients at Metamarkets, mentioning that they:
- Have built vertical-market analytic platform technology.
- Use a lot of Hadoop.
- Throw good parties. (That’s where the background photo on my Twitter page comes from.)
But while they’re a joy to talk with, writing about Metamarkets has been frustrating, with many hours and pages of wasted effort. Even so, I’m trying again, in a three-post series:
- Introduction to Metamarkets and Druid (this post)
- Druid overview
- Metamarkets’ back-end technology
Much like Workday, Inc., Metamarkets is a SaaS (Software as a Service) company, with numerous tiers of servers and an affinity for doing things in RAM. That’s where most of the similarities end, however, as Metamarkets is a much smaller company than Workday, doing very different things.
Metamarkets’ business is SaaS business intelligence, on large data sets, with low latency in both senses (fresh data can be queried soon after it arrives, and the queries themselves run at RAM speed). As you might imagine, Metamarkets is used by digital marketers and other kinds of internet companies, whose data typically wants to be in the cloud anyway. Approximate metrics for Metamarkets (and it may well have exceeded these by now) include 10 customers, 100,000 queries/day, 80 billion 100-byte events/month (before summarization), 20 employees, 1 popular CEO, and a metric ton of venture capital.
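For a back-of-envelope sense of scale, here is my own arithmetic on those round numbers (nothing here comes from Metamarkets beyond the figures quoted above):

```python
# Rough arithmetic on the figures cited above (all of them round approximations).
events_per_month = 80_000_000_000   # 80 billion events/month
bytes_per_event = 100               # ~100 bytes each, before summarization

raw_tb_per_month = events_per_month * bytes_per_event / 1e12
print(raw_tb_per_month, "TB of raw event data per month")               # ~8 TB

queries_per_day = 100_000
print(queries_per_day / (24 * 3600), "queries per second, on average")  # ~1.2
```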
To understand how Metamarkets’ technology works, it probably helps to start by realizing: Read more
Workday update
In August 2010, I wrote about Workday’s interesting technical architecture, highlights of which included:
- Lots of small Java objects in memory.
- A very simple MySQL backing store (append-only, <10 tables).
- Some modernistic approaches to application navigation.
- A faceted approach to BI.
I caught up with Workday recently, and things have naturally evolved. Most of what we talked about (by my choice) dealt with data management, business intelligence, and the overlap between the two.
It is now reasonable to say that Workday’s servers fall into at least seven tiers, although we talked mainly about five that work together as a kind of giant app/database server amalgamation. The three that do noteworthy data management can be described as:
- In-memory objects and transactions. This is similar to what Workday had before.
- Persistent MySQL. Part of this is similar to what Workday had before. In addition, Workday is now storing certain data in tables in the ordinary relational way.
- In-memory caching and indexing. This has three aspects:
- Indexes for the ordinary relational tables, organized in interesting ways.
- Indexes for Workday’s search-box navigation (as per my original Workday technical post, you can search across objects, task-names, etc.).
- Compressed copies of the Java objects, used to instantiate other servers as needed. The most obvious uses of this are:
- Recovery for the object/transaction tier.
- Launch for the elastic compute tier. (Described below.)
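As a hedged illustration of that last pattern (an assumed, generic version of the technique, not Workday’s actual Java code): serialize and compress an in-memory object graph so that a fresh server can be instantiated from the snapshot.

```python
import pickle
import zlib

# Hypothetical in-memory business objects (stand-ins for Workday's Java objects).
objects = {
    "emp:1001": {"name": "A. Example", "dept": "Finance"},
    "emp:1002": {"name": "B. Sample", "dept": "HR"},
}

def snapshot(objs):
    """Serialize and compress the object graph; cheap to hold in RAM or to ship."""
    return zlib.compress(pickle.dumps(objs))

def instantiate(blob):
    """Bring up a fresh tier's object set from the compressed snapshot."""
    return pickle.loads(zlib.decompress(blob))

blob = snapshot(objects)
recovered = instantiate(blob)
assert recovered == objects  # recovery and elastic launch both start from a snapshot
```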
Two other Workday server tiers may be described as: Read more
QlikTech bought Expressor
QlikTech has bought Expressor. Notes on that include:
- Expressor wanted to offer data integration/ETL (Extract/Transform/Load) that was all things to all people — great parallel performance, great UI, great price, etc.
- In practice, Expressor seemed to focus on cheap/easy ETL in the Microsoft Windows (I mean server) market.
- Expressor never got much traction. This seems confirmed by the “more than 20” figure for headcount mentioned in the acquisition press release.
- Both the press release and some tweets by QlikTech’s Donald Farmer seem to confirm that Expressor is being taken off the market for “boil the ocean” ETL. It will be companion technology to/integrated technology in QlikView.
- Unsurprisingly, Donald indicated that Expressor technology would expand past its Microsoft focus. (Edit: “If needed”)
Categories: Business intelligence, EAI, EII, ETL, ELT, ETLT, Expressor, Pricing, QlikTech and QlikView | 5 Comments |
Quick-turnaround predictive modeling
Last November, I wrote two posts on agile predictive analytics. It’s time to return to the subject. I’m used to KXEN talking about the ability to do predictive modeling very quickly, perhaps without professional statisticians; that’s the core of what KXEN does. But I was surprised when Revolution Analytics told me a similar story, based on a different approach, because ordinarily that’s not how R is used at all.
Ultimately, there seem to be three reasons why you’d want quick turnaround on your predictive modeling: Read more
Categories: Business intelligence, Investment research and trading, KXEN, Predictive modeling and advanced analytics, Revolution Analytics, Telecommunications, Web analytics | 10 Comments |
Big Data hype?
A reporter wrote in to ask whether investor interest in “Big Data” was justified or hype. (More precisely, that’s how I reinterpreted his questions. 🙂 ) His examples were Splunk’s IPO, Teradata’s stock price increase, and Birst’s financing. In a nutshell:
- My comments, lightly edited, are in plain text below.
- Further thoughts are in italics.
- Of course I also linked him to my post “Big Data” has jumped the shark.
- Overall, my responses boil down to “Of course there’s some hype.”
1. A great example of hype is that anybody is calling Birst a “Big Data” or “Big Data analytics” company. If anything, Birst is a “little data” analytics company that claims, as a differentiating feature, that it can handle ordinary-sized data sets as well. Read more
Categories: Business intelligence, Data warehousing, IBM and DB2, Microsoft and SQL*Server, Oracle, Splunk | 14 Comments |
Many kinds of memory-centric data management
I’m frequently asked to generalize in some way about in-memory or memory-centric data management. I can start:
- The desire for human real-time interactive response naturally leads to keeping data in RAM.
- Many databases will be ever cheaper to put into RAM over time, thanks to Moore’s Law. (Most) traditional databases will eventually wind up in RAM.
- However, there will be exceptions, mainly on the machine-generated side. Where data creation and RAM data storage are getting cheaper at similar rates … well, the overall cost of RAM storage may not significantly decline.
Getting more specific than that is hard, however, because:
- The possibilities for in-memory data storage are as numerous and varied as those for disk.
- The individual technologies and products for in-memory storage are much less mature than those for disk.
- Solid-state options such as flash just confuse things further.
Consider, for example, some of the in-memory data management ideas kicking around. Read more