Application areas
Posts focusing on the use of database and analytic technologies in specific application domains.
Introduction to Pentaho
I finally caught up with Pentaho, which along with Jaspersoft is one of the two most visible open source business intelligence companies, Actuate perhaps excepted. Highlights included:
- Much like Jaspersoft, Pentaho’s initial focus was mainly on embedded, operational BI.
- However, Pentaho now feels it has a decent end-user GUI as well, and traditional BI is a bigger part of its sales.
- Also, some sales are focused on data integration, perhaps in support of more traditional BI products. Pentaho has even had a deal in which it replaced Ab Initio for data integration. (Can there be any change more extreme than going from Ab Initio to open source?)
- As an example of technical breadth, Pentaho says that its Mondrian OLAP engine is used by Jaspersoft.
- Pentaho has Excel output, but not in the form of live formulas.
- Pentaho does XQuery.
- Industries with more Pentaho adoption than average include:
- Financial services (traditionally open-source-friendly, according to Pentaho)
- Government (ditto)
- Web 2.0 (obviously ditto)
- Travel/transportation (cash-strapped)
- Frontier Airlines is a Pentaho/Greenplum customer.
- TradeDoubler is a Pentaho/Infobright customer. (Pentaho thinks that TradeDoubler reloads its warehouse every day, which if true frankly casts some doubt on Infobright’s architecture.)
- Data mining is something of a Pentaho sideline. Pentaho’s data mining capabilities were built at a university in New Zealand (the Weka project), and some data mining research is still done in them. Separately, Pentaho has been integrated with R.
- Community contributions are concentrated in the areas you’d expect — features some user or system integrator needs for a specific project, connectors, bug reports, and the like.
Categories: Ab Initio Software, Application areas, Business intelligence, Data integration and middleware, EAI, EII, ETL, ELT, ETLT, Greenplum, Infobright, Jaspersoft, Pentaho, Pricing | 7 Comments |
Kognitio and WX-2 update
I went to Bracknell Wednesday to spend time with the Kognitio team. I think I came away with a better understanding of what the technology is all about, and why certain choices have been made.
Like almost every other contender in the market,* Kognitio WX-2 queries disk-based data in the usual way. Even so, WX-2’s design is very RAM-centric. Data gets on and off disk in mind-numbingly simple ways – table scans only, round-robin partitioning only (as opposed to the more common hash), and no compression. However, once the data is in RAM, WX-2 gets to work, happily redistributing as seems optimal, with little concern about which node retrieved the data in the first place. (I must confess that I don’t yet understand why this strategy doesn’t create ridiculous network bottlenecks.) How serious is Kognitio about RAM? Well, they believe they’re in the process of selling a system that will include 40 terabytes of the stuff. Apparently, the total hardware cost will be in the $4 million range.
*Exasol is the big exception. They basically use disk as a source from which to instantiate in-memory databases.
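To make the partitioning point concrete, here is a toy sketch (my own illustration in Python, not Kognitio code; the node count and row layout are invented) of the difference between round-robin and hash distribution of rows across nodes:

```python
# Toy illustration of the two distribution schemes mentioned above.
from collections import defaultdict
import hashlib

NODES = 4

def round_robin_partition(rows):
    """Deal rows out to nodes in turn; ignores the data, trivially balanced."""
    parts = defaultdict(list)
    for i, row in enumerate(rows):
        parts[i % NODES].append(row)
    return parts

def hash_partition(rows, key):
    """Send each row to the node chosen by hashing a key, so all rows with the
    same key value land on the same node (which helps co-located joins)."""
    parts = defaultdict(list)
    for row in rows:
        digest = hashlib.md5(str(row[key]).encode()).hexdigest()
        parts[int(digest, 16) % NODES].append(row)
    return parts

rows = [{"customer_id": i % 3, "amount": 10 * i} for i in range(8)]
print(round_robin_partition(rows))
print(hash_partition(rows, "customer_id"))
```

Hashing on a key keeps rows that share a key on the same node; round-robin ignores the data entirely, which keeps loading simple but pushes any redistribution work to query time, which is consistent with WX-2 doing that redistribution in RAM.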
Other technical highlights of the Kognitio WX-2 story include: Read more
Categories: Application areas, Data warehousing, Kognitio, Scientific research | 2 Comments |
Data warehouse load speeds in the spotlight
Syncsort and Vertica combined to devise and run a benchmark in which a data warehouse got loaded at 5 ½ terabytes per hour, which is several times faster than the figures used in any other vendors’ similar press releases in the past. Takeaways include:
- Syncsort isn’t just a mainframe sort utility company, but also does data integration. Who knew?
- Vertica’s design for overcoming the traditionally slow load speed of columnar DBMS works.
The latter is unsurprising. Back in February, I wrote at length about how Vertica makes rapid columnar updates. I don’t have a lot of subsequent new detail, but it made sense then and now. Read more
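For what it’s worth, here is a generic sketch in Python of the write-buffer idea hybrid columnar systems commonly use to get around slow trickle loads. It illustrates the general technique only, not Vertica’s actual internals, and all names are made up:

```python
# Generic sketch only: rows accumulate in a cheap in-memory buffer, and the
# expensive sort-and-write of columnar segments happens in large batches.
class ToyColumnStore:
    def __init__(self, columns, buffer_limit=4):
        self.columns = columns
        self.buffer = []        # row-oriented staging area, O(1) appends
        self.segments = []      # sorted, column-oriented batches on "disk"
        self.buffer_limit = buffer_limit

    def insert(self, row):
        self.buffer.append(row)                 # no column files touched here
        if len(self.buffer) >= self.buffer_limit:
            self.flush()

    def flush(self):
        rows = sorted(self.buffer, key=lambda r: r[self.columns[0]])
        segment = {c: [r[c] for r in rows] for c in self.columns}
        self.segments.append(segment)           # one bulk write per batch
        self.buffer = []

store = ToyColumnStore(["ts", "amount"])
for i in range(10):
    store.insert({"ts": 10 - i, "amount": i})
print(len(store.segments), "segments flushed;", len(store.buffer), "rows still buffered")
```

Appends go to a cheap in-memory buffer, and the expensive work of sorting and writing column segments happens in large batches, which is why loads can be fast even though the data ends up columnar.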
Big scientific databases need to be stored somehow
A year ago, Mike Stonebraker observed that conventional DBMS don’t necessarily do a great job on scientific data, and further pointed out that different kinds of science might call for different data access methods. Even so, some of the largest databases around are scientific ones, and they have to be managed somehow. For example:
- Microsoft just put out an overwrought press release. The substance seems to be that Pan-STARRS — a Jim Gray legacy also discussed in an August 2008 Computerworld article — is adding 1.4 terabytes of image data per night, and one not-so-new database adds 15 terabytes per year of some kind of computer simulation output used to analyze protein folding. Both run on SQL Server, of course.
- Kognitio has an astronomical database too, at Cambridge University, adding half a terabyte of data per night.
- Oracle is used for a McGill University proteomics database called CellMapBase. A figure of 50 terabytes of “mass storage” is cited, which doesn’t include tape backup and the like.
- The Large Hadron Collider, once it actually starts functioning, is projected to generate 15 petabytes of data annually, which will be initially stored on tape and then distributed to various computing centers around the world.
- Netezza is proud of its ability to serve images and the like quickly, although off the top of my head I’m not thinking of a major customer it has in that area. (Then again, if you just sell software, your academic discount can approach 100%; if, like Netezza, you have an actual cost of goods sold, that’s not as appealing an option.)
Long-term, I imagine that the most suitable DBMS for these purposes will be MPP systems with strong datatype extensibility — e.g., DB2, PostgreSQL-based Greenplum, PostgreSQL-based Aster nCluster, or maybe Oracle.
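As a tiny illustration of what “datatype extensibility” buys you in a PostgreSQL-based system, here is a hypothetical sketch using Python and psycopg2. The type, table, function, and connection details are all invented for the example:

```python
# Hypothetical sketch of datatype extensibility in PostgreSQL (the basis of
# Greenplum and Aster nCluster); names and connection string are invented.
import psycopg2

conn = psycopg2.connect("dbname=sky user=astro")   # made-up connection details
cur = conn.cursor()

# A user-defined composite type for a sky position, a table that stores it,
# and a function over the new type -- the kind of domain-specific extension
# scientific databases tend to need.
cur.execute("""
    CREATE TYPE sky_point AS (ra double precision, decl double precision);
    CREATE TABLE observations (
        obs_id   bigserial PRIMARY KEY,
        position sky_point,
        flux     double precision
    );
    CREATE FUNCTION ra_of(p sky_point) RETURNS double precision
        AS 'SELECT ($1).ra' LANGUAGE SQL IMMUTABLE;
""")

cur.execute(
    "INSERT INTO observations (position, flux) VALUES (ROW(150.1, 2.2)::sky_point, 3.5);"
)
cur.execute("SELECT ra_of(position) FROM observations;")
print(cur.fetchall())
conn.commit()
```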
Categories: Aster Data, Data types, Greenplum, IBM and DB2, Kognitio, Microsoft and SQL*Server, Netezza, Oracle, Parallelization, PostgreSQL, Scientific research | 1 Comment |
Update on Aster Data Systems and nCluster
I spent a few hours at Aster Data on my West Coast swing last week, which has now officially put out Version 3 of nCluster. Highlights included: Read more
Coral8 proposes CEP as a BI data platform
It used to be that Coral8 and StreamBase were the two complex event/stream processing (CEP) vendors most committed to branching out beyond the super-low-latency algorithmic trading market. But StreamBase seems to have pulled in its horns after a management change, focusing much more on the financial markets (and perhaps the defense/intelligence market as well). Aleri, Truviso, and Progress Apama, while each showing signs of branching out, don’t seem to have gone as far as Coral8 yet. And so, though it’s a small company with not all that many dozens of customers, my client Coral8 seems to be the one to look at when judging whether CEP really is relevant to a broad range of mainstream – no pun intended – applications.
Coral8 today unveiled a new product release – the not-so-concisely named “Coral8 Engine and Portal Release 5.5” – and a new buzzphrase — “Continuous Intelligence.” The interesting part boils down to this:
Coral8 is proposing CEP — excuse me, “Continuous Intelligence” — as a data-store-equivalent for business intelligence.
This includes both operational BI (the current sweet spot) and dashboards (the part with cool, real-time-visualization demos). Read more
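To make that concrete, here is a toy sketch in Python (not Coral8’s API, and the numbers are invented) of what the idea amounts to: a continuous query keeps a sliding-window aggregate current as events arrive, and a dashboard polls the aggregate instead of querying a database.

```python
# Toy sketch of "CEP as a BI data store": a continuous query maintains a
# sliding-window aggregate that a dashboard widget can poll directly.
from collections import deque
import time

class SlidingWindowSum:
    """Maintains the sum of event values seen in the last `window_seconds`."""
    def __init__(self, window_seconds=60):
        self.window_seconds = window_seconds
        self.events = deque()          # (timestamp, value) pairs
        self.total = 0.0

    def on_event(self, value, ts=None):
        ts = time.time() if ts is None else ts
        self.events.append((ts, value))
        self.total += value
        self._expire(ts)

    def current(self, ts=None):
        self._expire(time.time() if ts is None else ts)
        return self.total              # what a dashboard would poll

    def _expire(self, now):
        while self.events and self.events[0][0] < now - self.window_seconds:
            _, old_value = self.events.popleft()
            self.total -= old_value

# Simulated feed: revenue events arriving over time, 60-second window.
q = SlidingWindowSum(window_seconds=60)
for t, v in [(0, 10.0), (20, 5.0), (70, 2.5)]:
    q.on_event(v, ts=t)
print(q.current(ts=75))   # -> 7.5: the event at t=0 has aged out of the window
```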
Aster Data on online marketing data warehousing
Aster Data’s blog is getting to be like Vertica’s, in that I find myself recommending a large fraction of its posts.
The virtue of the latest one is that it strings together several customer examples in related areas of online marketing (which is pretty much the only sector Aster has so far sold into). I’ve tended to overgeneralize a bit, and use terms like “web analytics” or “clickstream analysis” even when they don’t wholly apply. The Aster post is a good antidote to that.
Categories: Application areas, Aster Data, Data warehousing, Web analytics | 1 Comment |
Vertical market XML standards
Tracking the alphabet soup of vertical market XML standards is hard. So as a starting point, I’m splitting a list I got from IBM into a standalone post.
Among the most important or successful IBM pureXML–supported standards, in terms of downloads and other evidence of customer interest, are: Read more
Categories: Application areas, EAI, EII, ETL, ELT, ETLT, IBM and DB2, pureXML, Structured documents | 2 Comments |
Oracle Database Machine performance and compression
Greg Rahn was kind enough to recite in his blog what Oracle has disclosed about the first Exadata testers. I don’t track hardware model details, so I don’t know how the testers’ respective current hardware environments compare to that of the Oracle Database Machine.
Each of the customers cited below received “half” an Oracle Database Machine. As I previously noted, an Oracle Database Machine holds either 14.0 or 46.2 terabytes of uncompressed data. This suggests the 220 TB customer listed below — LGR Telecommunications — got compression of a little under 10:1 for a CDR (Call Detail Record) database. By comparison, Vertica claims 8:1 compression on CDRs.
Greg also writes of POS (Point Of Sale) data being used for the demo. If you do the arithmetic on the throughput figures (13.5 vs. a little over 3), compression was a little under 4.5:1. I don’t know what other vendors claim for POS compression.
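For the record, here is the back-of-the-envelope arithmetic behind those ratios (my calculation, assuming the half-machine in question was the larger 46.2 TB configuration):

```python
# My back-of-the-envelope math for the compression figures discussed above.
half_machine_raw_tb = 46.2 / 2      # half of the 46.2 TB (uncompressed) configuration
lgr_user_data_tb = 220
print(lgr_user_data_tb / half_machine_raw_tb)   # ~9.5, i.e. a little under 10:1

# POS demo: ratio of the two throughput figures (13.5 vs. "a little over 3").
print(13.5 / 3.0)                   # 4.5 with exactly 3; a bit under 4.5:1 in practice
```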
Here are the details Greg posted about the four most open Oracle Database Machine tests: Read more
Categories: Data warehouse appliances, Data warehousing, Database compression, Exadata, Oracle, Telecommunications | 9 Comments |
Some of Oracle’s largest data warehouses
Googling around, I came across an Oracle presentation – given some time this year – that lists some of Oracle’s largest data warehouses. Ten databases are listed, all with >16 TB, which is fairly consistent with Larry Ellison’s confession during the Exadata announcement that Oracle has trouble over 10 TB (something I’ve gotten a lot of flak from a few Oracle partisans for pointing out … 😀 ).
However, what’s being measured is probably not the same in all cases. For example, I think the Amazon 70 TB figure is obviously for spinning disk (elsewhere in the presentation it’s stated that Amazon has 71 TB of disk). But the 16 TB British Telecom figure probably is user data — indeed, it’s the same figure Computergram cited for BT user data way back in 2001.
The list is: Read more
Categories: Data warehousing, Oracle, Specific users, Telecommunications, Yahoo | 6 Comments |