Human real-time
I first became an analyst in 1981. And so I was around for the early days of the movement from batch to interactive computing, as exemplified by:
- The rise of minicomputers as mainframe alternatives (first VAXen, then the ‘nix systems that did largely supplant mainframes).
- The move from batch to interactive computing even on mainframes, a key theme of 1980s application software industry competition.
Of course, wherever there is interactive computing, there is a desire for interaction so fast that users don’t notice any wait time. Dan Fylstra, when he was pitching me the early windowing system VisiOn, characterized this as response so fast that the user didn’t tap his fingers waiting.* And so, with the move to any kind of interactive computing at all came a desire that the interaction be quick-response/low-latency. Read more
CodeFutures/dbShards update
I’ve been talking a fair bit with Cory Isaacson, CEO of my client CodeFutures, which makes dbShards. Business notes include:
- 7 production users, plus an 8th imminent.
- 12-14 signed contracts beyond that.
- ~160 servers in production.
- One customer who has almost 15 terabytes of data (in the cloud).
- Still <10 people, pretty much all engineers.
- Profitable, but looking to raise a bit of growth capital.
Apparently, the figure of 6 dbShards customers in July 2010 is more comparable to today’s 20ish signed contracts than to today’s 7-8 production users. About 4 of the original 6 are in production now.
NDA stuff aside, the main technical subject we talked about is something Cory calls “relational sharding”. The point is that dbShards’ transparent sharding can be done in such a way as to make many joins be single-server. Specifically:
- When a table is small enough to be replicated in full at every node, you can join on it without moving data across the network.
- When two tables are sharded on the same key, you can join on that key without moving data across the network.
dbShards can’t do cross-shard joins, but it can do distributed transactions comprising multiple updates. Cory argues persuasively that in almost all cases this is enough; but I see cross-shard joins as a feature that should someday be added to dbShards even so.
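To make those two single-server cases concrete, here is a minimal sketch of the routing decision a sharding layer could make. The table metadata, names, and rule are my own illustration, not dbShards’ actual implementation or API.

```python
# Hypothetical illustration of "relational sharding" routing decisions.
# The metadata and names below are invented for this sketch; they are not
# dbShards' actual data structures or API.

# Each table is either replicated in full on every node, or sharded on a key.
TABLES = {
    "countries": {"replicated": True,  "shard_key": None},
    "customers": {"replicated": False, "shard_key": "customer_id"},
    "orders":    {"replicated": False, "shard_key": "customer_id"},
    "products":  {"replicated": False, "shard_key": "product_id"},
}

def join_is_single_server(left, right, join_key):
    """Can a join of `left` and `right` on `join_key` run without moving
    data across the network?"""
    l, r = TABLES[left], TABLES[right]
    # Case 1: a fully replicated table is present on every node anyway.
    if l["replicated"] or r["replicated"]:
        return True
    # Case 2: both tables are sharded on the join key, so matching rows
    # land on the same shard.
    return l["shard_key"] == join_key and r["shard_key"] == join_key

print(join_is_single_server("orders", "countries", "country_code"))  # True  (replicated)
print(join_is_single_server("orders", "customers", "customer_id"))   # True  (co-sharded)
print(join_is_single_server("orders", "products", "product_id"))     # False (cross-shard)
```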
The real issue with dbShards’ transparent sharding is ensuring it’s really transparent. Cory regards as typical a customer with a couple thousand tables who had to change only a dozen or so SQL statements to implement dbShards. But there are near-term plans to automate things to the point that no SQL changes are needed at all. The essence of that change is this: Read more
Notes on the ClearStory Data launch, including an inaccurate quote from me
ClearStory Data launched, with nice coverage in the New York Times, Computerworld, and elsewhere. But from my standpoint, there were some serious problems:
- (Bad.) I was planning to cover the launch as well, in a split exclusive, but that plan was changed, leaving me with considerable wasted work.
- (Worse.) I wasn’t told of the change as soon as it was known. Indeed, I wasn’t told at all; I was left to infer it from the fact that I was now being asked to talk with other reporters.
- (Horrific.) I was quoted in the ClearStory launch press release, but while the sentiments were reasonably in line with my own, the quote was incorrect.*
I’m utterly disgusted with this whole mess, although after talking with her a lot I’m fine with CEO Sharmila Mulligan’s part in it, which is to say with ClearStory’s part in general.
*I avoid the term “platform” as much as possible; indeed, I still don’t really know what the “new platforms” part was supposed to refer to. The Frankenquote wound up with some odd grammar as well.
Actually, in principle I’m a pretty close adviser to ClearStory (for starters, they’re one of my stealth-mode clients). That hasn’t really ramped up yet; in particular, I haven’t had a technical deep dive. So for now I’ll just say:
DataStax Enterprise 2.0
Edit: Multiple errors in the post below have been corrected in a follow-on post about DataStax Enterprise and Cassandra.
My client DataStax is announcing DataStax Enterprise 2.0. The big point of the release is that there’s a bunch of stuff integrated together, including at least:
- Cassandra — the NoSQL DBMS, which DataStax sometimes calls “DataStax Server”. Edit: That’s not really a fair criticism of DataStax’s messaging.
- Hadoop MapReduce, which DataStax sometimes calls “Hadoop”. Edit: That is indeed fair. 🙂
- Sqoop — the general way to connect relational DBMS to Hadoop, which DataStax sometimes calls “RDBMS integration”.
- Solr — the search-centric Apache project, or big parts of it, which DataStax generally calls either “Solr” or “Solr compatibility”.
- log4j — an Apache project that has something or other to do with logging, or parts of it, which DataStax sometimes calls “log file integration”.
- DataStax OpsCenter — some management tools and so on around Cassandra and the rest of the product line.
DataStax stresses that all this runs on the same cluster, with the same administrative tools and so on. For example, on a single cluster:
- You can manage the interactive data for a web site.
- You can store the logs for that website.
- You can analyze all of the above in Hadoop.
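As a rough illustration of what “all on one cluster” can mean in practice, here is a sketch of interactive data and log data living side by side in Cassandra, where Hadoop jobs on the same nodes could later scan the logs in bulk. The keyspace, table, and column names are hypothetical, and the code uses a present-day Python driver purely for readability; it is not DataStax’s own example.

```python
# Hypothetical sketch: interactive web data and log data in one Cassandra
# cluster. Keyspace/table/column names are invented for illustration.
import datetime
import uuid

from cassandra.cluster import Cluster

cluster = Cluster(["10.0.0.1"])      # connect via any node in the cluster
session = cluster.connect("webapp")  # hypothetical keyspace

user_id = uuid.uuid4()
now = datetime.datetime.utcnow()

# Interactive path: low-latency update of a user profile.
session.execute(
    "UPDATE user_profiles SET last_seen = %s WHERE user_id = %s",
    (now, user_id),
)

# Logging path: append a page-view event, to be analyzed later in bulk,
# e.g. by a Hadoop MapReduce job running against the same cluster.
session.execute(
    "INSERT INTO page_views (user_id, ts, url) VALUES (%s, %s, %s)",
    (user_id, now, "/checkout"),
)
```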
Juggling analytic databases
I’d like to survey a few related ideas:
- Enterprises should each have a variety of different analytic data stores.
- Vendors — especially but not only IBM and Teradata — are acknowledging and marketing around the point that enterprises should each have a number of different analytic data stores.
- In addition to having multiple analytic data management technology stacks, it is also desirable to have an agile way to spin out multiple virtual or physical relational data marts using a single RDBMS. Vendors are addressing that need.
- Some observers think that the real essence of analytic data management will be in data integration, not the actual data management.
Here goes. Read more
Kinds of data integration and movement
“Data integration” can mean many different things, to an extent that’s impeding me from writing about the area. So I’ll start by simply laying out some of the myriad ways that data can be brought to where it is needed, and worry about other subjects later. Yes, this is a massive wall of text, and incomplete even so — but that in itself is my central point.
There are two main paradigms for data integration:
- Movement or replication — you take data from one place and copy it to another.
- Federation — you treat data in multiple different places logically as if it were all in one database.
Data movement and replication typically take one of three forms:
- Logical, transactional, or trigger-based — sending data across the wire every time an update happens, or as the result of a large-result-set query/extract, or in response to a specific request.
- Log-based — like logical replication, but driven by the transaction/update log rather than the core data management mechanism itself, so as to avoid directly overstressing the DBMS.
- Block/file-based — sending chunks of data, and expecting the target system to store them first and only make sense of them afterward.
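To make the first two of those concrete, here is a toy contrast between a trigger-style path, which ships each change as part of handling the update, and a log-based path, where a separate reader tails the update log. Every name and structure below is invented for illustration and stands in for real DBMS machinery.

```python
# Toy contrast of trigger-based vs. log-based replication. All names and
# structures are invented; they stand in for real DBMS machinery.

change_log = []     # stands in for the DBMS's transaction/update log
primary_table = {}  # stands in for a table on the source system

def apply_update_with_trigger(row, send_to_replica):
    """Logical/trigger-based style: replication work happens on the
    update path itself, every time a change is made."""
    primary_table[row["id"]] = row
    change_log.append(("upsert", row))  # the DBMS writes its log anyway
    send_to_replica(("upsert", row))    # ship the change immediately

def tail_log(position, send_to_replica):
    """Log-based style: a separate reader follows the log and ships
    changes, keeping replication work off the core update path."""
    for entry in change_log[position:]:
        send_to_replica(entry)
    return len(change_log)              # position to resume from next time

# A trivial "replica" that just collects whatever it is sent. A real
# deployment would use one style or the other, not both at once.
replica = []
apply_update_with_trigger({"id": 1, "name": "Alice"}, replica.append)
new_position = tail_log(0, replica.append)
```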
Beyond the core functions of movement, replication, and/or federation, there are other concerns closely connected to data integration. These include:
- Transparency and emulation, e.g. via a layer of software that makes data in one format look like it’s in another. (If memory serves, this is the use case for which Larry DeBoever coined the term “middleware.”)
- Cleaning and quality — with new uses of data can come new requirements for accuracy.
- Master, reference, or canonical data — much of the value of integration comes from getting different systems to agree on a single consistent version of core entities such as customers and products.
- Archiving and information preservation — part of keeping data safe is ensuring that there are copies at various physical locations. Another part can be making it logically tamper-proof, or at least highly auditable.
In particular, the following are largely different from each other. Read more
Departmental analytics — best practices
I believe IT departments should support and encourage departmental analytics efforts, where “support” and “encourage” are not synonyms for “control”, “dominate”, “overwhelm”, or even “tame”. A big part of that is:
Let, and indeed help, departments have the data they want, when they want it, served with blazing performance.
Three things that absolutely should NOT be obstacles to these ends are:
- Corporate DBMS standards.
- Corporate data governance processes.
- The difficulties of ETL.
Agile predictive analytics — the “easy” parts
I’m hearing a lot these days about agile predictive analytics, albeit rarely in those exact terms. The general idea is unassailable, in that it boils down to using data as quickly as reasonably possible. But discussing particulars is hard, for several reasons:
- Pundits tend to sketch castles in the air.
- Vendors tend to confuse part of the story — generally the part they happen to offer 🙂 — with the whole.
- Different use cases give rise to different kinds of issues.
At least three of the generic arguments for agility apply to predictive analytics:
- Doing the correct thing soon is usually better than doing the same correct thing later.
- If it doesn’t take much time to do something, hopefully it doesn’t take that much expense (labor and so on) either.
- It’s hard to get new stuff completely right on the first try. Often, the best strategy is to come close fast, then fix what’s still not ideal.
But the reasons to want agile predictive analytics don’t stop there.
QlikView 11 and the rise of collaborative BI
QlikView 11 came out last month. Let me start by pointing out:
- As one might expect, QlikView 11 contains fairly leading-edge stuff, but also some “better late than never” features.
- The leading-edge stuff is concentrated in the general area of “collaboration”.
- Additionally, QlikTech is always pushing the QlikView user interface ahead in various ways.
- The “Well, it’s about time!” feature list starts with the ability to load QlikView via third-party ETL tools (Informatica now, others coming).
- QlikTech is generally good at putting up pretty pictures of its product. You can find some in the “What’s New in QlikView 11” document via a general QlikView resource page.*
- Stephen Swoyer wrote a good article summarizing QlikView 11.
*One confusing aspect of that paper: non-standard uses of the terms “analytic app” and “document”.
As QlikTech tells it, QlikView 11 adds two kinds of collaboration features:
- Integration with social media, which QlikTech calls “asynchronous integration.”
- Direct sharing of the QlikView UI, which QlikTech calls “synchronous integration.”
I’d add a third kind, because QlikView 11 also takes some baby steps toward what I regard as a key aspect of BI collaboration — the ability to define and track your own metrics. It’s way, way short of the metric flexibility I called for in a post last year, but at least it’s a small start.
MarkLogic’s Hadoop connector
It’s time to circle back to a subject I skipped when I otherwise wrote about MarkLogic 5: MarkLogic’s new Hadoop connector.
Most of what’s confusing about the MarkLogic Hadoop Connector lies in two pairs of options it presents you:
- Hadoop can talk XQuery to MarkLogic. But alternatively, Hadoop can use a long-established simple(r) Java API for streaming documents into or out of a MarkLogic database.
- Hadoop can make requests to MarkLogic in MarkLogic’s normal mode of operation, namely to address any node in the MarkLogic cluster, which then serves as a “head” node for the duration of that particular request. But alternatively, Hadoop can use a long-standing MarkLogic option to circumvent the whole DBMS cluster and only talk to one specific MarkLogic node.
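To keep those two pairs of choices straight, here is how I think of them, written out as a toy sketch. The option names and values are my own shorthand, not the connector’s actual configuration properties (the real connector is a Java library).

```python
# My shorthand for the connector's two pairs of choices. These are NOT the
# connector's real configuration properties or values.

choices = {
    # Pair 1: how Hadoop talks to MarkLogic.
    "request_style": "xquery",         # or "streaming_document_api"
    # Pair 2: which MarkLogic node(s) Hadoop talks to.
    "addressing": "any_node_as_head",  # or "one_specific_node"
}

def describe(c):
    talk = ("submits XQuery requests to MarkLogic"
            if c["request_style"] == "xquery"
            else "streams documents via the simpler Java API")
    route = ("lets any node in the MarkLogic cluster coordinate each request"
             if c["addressing"] == "any_node_as_head"
             else "bypasses cluster coordination and talks to one fixed node")
    return f"Hadoop {talk}, and {route}."

print(describe(choices))
```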
Otherwise, the whole thing is just what you would think:
- Hadoop can read from and write to MarkLogic, in parallel at both ends.
- If Hadoop is just writing to MarkLogic, there’s a good chance the process is properly called “ETL.”
- If Hadoop is reading a lot from MarkLogic, there’s a good chance the process is properly called “batch analytics.”
MarkLogic said that it wrote this Hadoop connector itself.