Oracle
Analysis of software titan Oracle and its efforts in database management, analytics, and middleware. Related subjects include:
- Oracle TimesTen
- (in The Monash Report) Operational and strategic issues for Oracle
- (in Software Memories) Historical notes on Oracle
- Most of what’s written about in this blog
Notes on the Oracle Database 11g Release 2 white paper
The Oracle Database 11g Release 2 white paper I cited a couple of weeks ago has evidently been edited, given that a phrase I quoted last month is no longer to be found. Anyhow, here are some quotes from, and comments on, what appears to be the latest version. Read more
Some issues in comparing analytic DBMS performance
The analytic DBMS/data warehouse appliance market is full of competitive performance claims. Sometimes, they’re completely fabricated, with no basis in fact whatsoever. But often performance-advantage claims are based on one or more head-to-head performance comparisons. That is, System A and System B are used to run the same set of queries, and some function is applied that takes the two sets of query running times as input and spits out a relative performance number as output. Read more
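The choice of that function matters more than it might seem. Here’s a minimal Python sketch (my own illustration, not any vendor’s actual methodology) contrasting two plausible choices: the geometric mean of per-query speedups versus a simple ratio of total running times. The same measurements can produce very different headline numbers:

```python
from math import prod

def relative_performance(times_a, times_b):
    """Geometric mean of per-query speedups of System A over System B.

    times_a, times_b: running times (seconds) for the same query set.
    A result of 2.0 means System A averaged ~2X faster per query.
    """
    assert len(times_a) == len(times_b)
    speedups = [b / a for a, b in zip(times_a, times_b)]
    return prod(speedups) ** (1 / len(speedups))

# System A crushes one query and ties on the other two:
a_times, b_times = [1, 100, 100], [100, 100, 100]
print(relative_performance(a_times, b_times))   # ~4.6X
print(sum(b_times) / sum(a_times))              # ~1.5X on total runtime
```

Neither number is wrong, exactly; they just answer different questions, which is one reason competing vendors can each claim victory from the same benchmark.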
Categories: Data warehousing, Exadata, Oracle, SAP AG | 1 Comment |
Oracle gives a few customer database size examples
In its recent quarterly conference call, Oracle said (as per the Seeking Alpha transcript):
AC Neilsen, for instance, we deployed a 45-terabyte data [mart], they called it; Adidas, 13 terabytes; Australian Bureau of Statistics, 250 terabytes; and of course, some of our high-end ones that you have probably heard of in the past, AT&T, 250 terabytes; Yahoo!, 700 terabytes — just gives you an idea of the size of the databases that are out there and how they are growing, and that’s driving the need for greater throughput.
I don’t know what’s being counted there, but I wouldn’t be surprised if those were legit user-data figures.
Some other notes:
- The Yahoo database is of course Yahoo’s first-generation data warehouse, which has been largely superseded by an internal system more than 10X that size. (Edit: Actually, Greg Rahn of Oracle says below that it’s a different database.)
- I’m keynoting the Netezza road show this month, and Nielsen is up there on stage touting Netezza. (Edit: Nielsen indeed does the overwhelming majority of its data warehousing on Netezza.)
- I’d be surprised if AT&T’s largest data warehouse were “only” 250 terabytes in size. (Edit: Actually, I am told the database in question is 310 TB of user data and growing. More later, hopefully.)
- Oracle didn’t exactly say that those were Exadata installations.
Categories: Analytic technologies, Data warehousing, Exadata, Netezza, Oracle, Specific users, Telecommunications, Web analytics, Yahoo | 10 Comments |
What could or should make Oracle/MySQL antitrust concerns go away?
When the Oracle/MySQL deal was first announced, I wrote:
I can probably come up with business practices that could make things very hard on Oracle/MySQL competitors … but I haven’t found a compelling antitrust trigger on my first pass over the subject.
Subsequently, there’s been a lot of discussion about whether or not Oracle can use control of MySQL to make life difficult for third-party MySQL storage engine vendors.
Now the European Commission is delaying the Oracle/Sun deal, explicitly because of Oracle/MySQL antitrust fears. That is, the European Commission wants to be reassured that an Oracle takeover of MySQL won’t unduly impinge upon the future availability of open source/low cost DBMS alternatives. This raises the natural question:
What could Oracle do to assure concerned parties that its ownership of MySQL won’t unduly hamper open-source-based DBMS competition?
I think that’s indeed the crucial question. The Oracle/Sun deal has enough momentum at this point that it both should and will be allowed to happen — perhaps with safeguards — rather than banned outright. If you have concerns about Oracle’s pending acquisition of MySQL, you should speak up and outline what kinds of regulatory safeguards would alleviate the problems you foresee.
More or less obvious possibilities include:
- Divest MySQL. This is obviously an extreme measure, but it surely would work.
- Provide some money and trademark rights to MySQL forkers. If MariaDB and Drizzle were put into strong competitive positions with MySQL today, it’s hard to see how regulators could object to any future maneuverings Oracle might envision with the GPLed side of MySQL.
- Offer a standard, attractive, long-term deal to MySQL bundlers. The commercial/non-GPL version of MySQL is a requirement for appliance vendors (surely), OEM vendors (probably), and storage engine vendors (maybe — I disagree, but I’m evidently in the minority).
- Strengthen PostgreSQL. 🙂 Realistically, that’s not going to be part of any Oracle/MySQL resolution, so I’ll leave it as a subject for another time.
Categories: Mid-range, MySQL, Open source, Oracle, PostgreSQL | 9 Comments |
Oracle Exadata hybrid columnar compression
Oracle Database 11g Release 2 is out, and as usual I wasn’t briefed — perhaps because Oracle is more scared of hard questions than its competitors are, perhaps for some other reason entirely.* Anyhow, Oracle Database 11g Release 2 contains an Exadata-only feature called hybrid columnar compression. The Oracle Database 11g Release 2 white paper says “data is grouped, ordered, and stored one column at a time.” But Kevin Closson clarifies:
The word hybrid is important.
Rows are still used. They are stored in an object called a Compression Unit. Compression Units can span multiple blocks. Like values are stored in the compression unit with metadata that maps back to the rows.
So, “hybrid” is the word. But, none of that matters as much as the effectiveness. This form of compression is extremely effective.
That sounds a whole lot like PAX. Specifically, in Oracle’s case I would guess “hybrid columnar compression” provides the compression benefits of column stores, but not column stores’ I/O benefits, and also not any kind of in-memory compression. Read more
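For illustration only, here’s a toy Python rendition of the PAX-like idea; it is my own mock-up, not Oracle’s actual on-disk format. A batch of rows is pivoted into columns inside a “compression unit,” each column is run-length encoded, and any individual row can still be reassembled from the unit, which is the “hybrid” part:

```python
from itertools import groupby

def build_compression_unit(rows):
    """Toy PAX-style layout: store a batch of rows one column at a time,
    run-length encoding each column. The unit stays row-addressable."""
    columns = zip(*rows)  # pivot the row batch into columns
    return [[(value, len(list(run))) for value, run in groupby(col)]
            for col in columns]

def read_row(unit, i):
    """Reassemble row i from the column-organized unit."""
    row = []
    for encoded_column in unit:
        position = 0
        for value, count in encoded_column:
            if i < position + count:
                row.append(value)
                break
            position += count
    return tuple(row)

rows = [("US", "open", 10), ("US", "open", 12), ("US", "closed", 9)]
unit = build_compression_unit(rows)
print(unit)               # [[('US', 3)], [('open', 2), ('closed', 1)], ...]
print(read_row(unit, 2))  # ('US', 'closed', 9)
```

Note what the sketch does and doesn’t buy you: like values sit together, so compression can be very effective, but a query still reads whole compression units, so you don’t get a pure column store’s I/O savings, consistent with the guesses above.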
Categories: Columnar database management, Data warehousing, Database compression, Exadata, Oracle, Theory and architecture | 20 Comments |
Bottleneck Whack-A-Mole
Developing a good software product is often a process of incremental improvement. Obviously, that can happen in the case of feature addition or bug-fixing. Less obviously, there’s also great scope for incremental improvement in how the product works at its core.
And it goes even further. For example, I was told by a guy who is now a senior researcher at Attivio: “How do you make a good speech recognition product? You start with a bad one and keep incrementally improving it.”
In particular, I’ve taken to calling the process of enhancing a product’s performance across multiple releases “Bottleneck Whack-A-Mole” (rhymes with guacamole). This is a reference to the Whack-A-Mole arcade game,* the core idea of which is (a software rendition is sketched after the list):
- An annoying mole pops its head up.
- You whack it with a mallet.
- Another pops its head up.
- You whack that one.
- Repeat, as mole_count increments to a fairly large integer.
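Translated into software terms, the process might look like this deliberately cartoonish Python sketch, with hypothetical component names and an arbitrary 5X improvement per whack:

```python
def bottleneck_whack_a_mole(profile, releases):
    """Cartoon of release-over-release tuning: each release, find the
    worst bottleneck and speed it up ~5X, at which point some other
    component becomes the new worst bottleneck."""
    for release in range(1, releases + 1):
        mole, cost = max(profile.items(), key=lambda kv: kv[1])
        profile[mole] = cost / 5  # whack it
        print(f"Release {release}: whacked {mole}, "
              f"total query time now {sum(profile.values()):.0f}s")

# Hypothetical per-query cost profile (seconds) for an analytic DBMS:
profile = {"disk_scan": 100, "network_shuffle": 40, "join_cpu": 25, "planner": 5}
bottleneck_whack_a_mole(profile, releases=3)
```

Each whack helps, but the total improvement per release shrinks as the remaining moles get smaller.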
Categories: Data warehousing, Exadata, Fun stuff, Netezza, Oracle, Theory and architecture | 24 Comments |
Sorting out Netezza and Oracle Exadata data warehouse appliance pricing
Netezza recently announced a new generation of data warehouse appliance called TwinFin. TwinFin’s clearest stated list price is “a little under $20,000 per terabyte of user data,” which in my opinion immediately became the new industry reference point for discussing prices in the data warehouse appliance category. Vigorous discussion ensued, especially in the comment thread to the first of the two posts linked above. Here’s some followup.
Netezza should not have claimed a “10-15X price/performance improvement,” based on a 3-5X performance improvement and a 3X decrease in price/terabyte, and I should have grilled Netezza harder when it first made the claim. In fact, there is no unit of performance that you can, in a reasonable blended average, get 10-15X more of per dollar in TwinFin than you can in the predecessor NPS series.
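For what it’s worth, the arithmetic that presumably produced the disputed figure is simple compounding; this reconstruction is mine, not Netezza’s published math:

```python
# Naive compounding behind the disputed claim (illustrative only):
perf_gains = (3.0, 5.0)  # claimed TwinFin speedup range vs. the NPS series
price_cut = 3.0          # claimed decrease in list price per terabyte
print([gain * price_cut for gain in perf_gains])  # [9.0, 15.0] -- "10-15X"
# The catch: this treats "performance" as a single number, when no one
# unit of work actually got 9-15X cheaper across a blended workload.
```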
Categories: Data warehousing, Exadata, Netezza, Oracle, Pricing | 19 Comments |
“The Netezza price point”
Over the past couple of years, quite a few data warehouse appliance or DBMS vendors have talked to me directly in terms of “Netezza’s price point,” or some similar phrase. Some have indicated that they’re right around the Netezza price point, but think their products are superior to Netezza’s. Others have stressed the large gap between their price and Netezza’s. But one way or the other, “Netezza’s price” has been an industry metric.
One reason everybody talks about the “Netezza (list) price” is that it hasn’t been changing much, seemingly staying stable at $50-60K/terabyte for a long time. And thus Teradata’s 2550 and Oracle’s larger-disk Exadata configuration — both priced more or less in the same range — have clearly been price-competitive with Netezza since their respective introductions.
That just changed. Netezza is cutting its pricing to the $20K/terabyte range imminently, with further cuts to come. So where does that leave competitors?
- The Teradata 1550 is in the Netezza price range (still a little below, actually).
- Oracle basically has nothing price-competitive with Netezza.
- Microsoft has stated it plans to introduce Madison below the old DATAllegro price points. Conceivably, that could be competitive with Netezza’s new pricing, although I haven’t checked how much it now costs simply to buy a lot of SQL Server licenses (presumably a lower bound on Madison’s price, and, hardware aside, possibly the whole thing, since Microsoft likes to create large product bundles).
- XtremeData just launched in the new Netezza price range.
- Troubled Dataupia is hard to judge. While on the surface Dataupia’s prices sound very low, you can’t use a Dataupia box unless you also have a brand-name DBMS (license and hardware) alongside it. That obviously affects total cost significantly.
- Kickfire seems unaffected, as it doesn’t and most likely won’t compete with Netezza (different database size ranges).
- For the most part, software-only vendors are free to adapt or not as they choose. Hardware prices generally don’t need to be over $10K/terabyte, and in some cases could be a lot less. So the question is how far they’re willing to discount their software; the implied arithmetic is sketched below.
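In round numbers, the software-pricing headroom implied by that last bullet works out as follows (the figures are only the round ones cited above, not any vendor’s actual costs):

```python
# Rough headroom for a software-only vendor matching the new price point:
reference_list_price = 20_000  # $/terabyte of user data, the new Netezza bar
hardware_cost_cap = 10_000     # $/terabyte, the generous upper bound above
software_headroom = reference_list_price - hardware_cost_cap
print(f"${software_headroom:,}/TB left to cover the software license")
```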
Categories: Analytic technologies, Data warehouse appliances, Data warehousing, Dataupia, Exadata, Kickfire, Oracle, Pricing, Teradata, XtremeData | 14 Comments |
Initial reactions to IBM acquiring SPSS
IBM is acquiring SPSS. My initial thoughts (questions by Eric Lai of Computerworld) include:
1) good buy for IBM? why or why not?
Yes. The integration of predictive analytics with other analytic or operational technologies is still ahead of us, so there was a lot of value to be gained from SPSS beyond what it was worth standalone. (That said, I haven’t actually looked at the numbers, so I have no comment on the price.)
By the way, SPSS coined the phrase “predictive analytics”, with the rest of the industry then coming around to using it. As with all successful marketing phrases, it’s somewhat misleading, in that the technology it names isn’t wholly focused on prediction.
2) how does it position IBM vs. competitors?
IBM’s ownership immediately makes SPSS a stronger competitor to SAS. Any advantage to the rest of IBM depends on the integration roadmap and execution.
3) How does this particularly affect SAP and SAS and Oracle, IBM’s closest competitors by revenue according to IDC’s figures?
If either Oracle or SAP had bought SPSS, it would have gained a competitive advantage over the other in integrating predictive analytics with packaged operational apps. That’s a missed opportunity for each.
One notable point is that SPSS is more SQL-oriented than SAS. Thus, SPSS has gotten performance benefits from Oracle’s in-database data mining technology that SAS apparently hasn’t.
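To make “in-database” concrete, here’s a toy Python sketch, with SQLite standing in for Oracle and a made-up linear scoring formula standing in for a real SPSS model. The point is just that the scoring expression runs where the data lives, instead of every row being shipped to the client first:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (age REAL, balance REAL)")
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [(34.0, 1200.0), (51.0, 80.0), (29.0, 9500.0)])

# Client-side scoring: ship every row out of the DBMS, then compute.
client_side = [0.02 * age + 0.0001 * balance
               for age, balance in
               conn.execute("SELECT age, balance FROM customers")]

# In-database scoring: push the same expression into SQL, so only the
# (typically far smaller) scored result crosses the wire.
in_database = [row[0] for row in conn.execute(
    "SELECT 0.02 * age + 0.0001 * balance FROM customers")]

assert all(abs(c - d) < 1e-9 for c, d in zip(client_side, in_database))
```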
IBM’s done a good job of keeping its acquired products working well with Oracle and other competitive DBMS in the past, and SPSS will surely be no exception.
Obviously, if IBM does a good job of Cognos/SPSS integration, that’s bad for competitors, starting with Oracle and SAP/Business Objects. So far business intelligence/predictive analytics integration has been pretty minor, because nobody’s figured out how to do it right, but some day that will change. Hmm — I feel another “Future of … ” post coming on.
4) Do you predict further M&A?
Always. 🙂
Related links
- Official word from SPSS and IBM
- Blog posts from Larry Dignan and James Taylor
- James Kobielus’s post, which includes the obvious point that Oracle — unlike SAP — has pretty decent data mining of its own
- Eric Lai’s actual article
Categories: Analytic technologies, Cognos, IBM and DB2, Oracle, SAP AG, SAS Institute | 8 Comments |
Oracle cites Exadata wins
A couple of weeks ago, Oracle put out a press release about Exadata wins. Highlights include:
- 20 names of actual customers.
- One quote citing a competitive win (over Netezza).
- One quote citing a ~50X speedup of one query “without manual tuning”.
- One quote citing consistent 10-72X query performance speedups.
- One quote citing a speedup from “days” to “minutes”.
Unless I missed it, none of the quotes implied Exadata was actually in production, and none compared hardware between the old/slow/production and Exadata/fast/test systems.