Oracle
Analysis of software titan Oracle and its efforts in database management, analytics, and middleware. Related subjects include:
- Oracle TimesTen
- (in The Monash Report) Operational and strategic issues for Oracle
- (in Software Memories) Historical notes on Oracle
- Most of what’s written about in this blog
Gartner’s 2009 Magic Quadrant for Business Intelligence
A few days ago I tore into the Gartner Magic Quadrant for Data Warehouse DBMS. Well, the 2009 Gartner Magic Quadrant for Business Intelligence Platforms is out too. Unlike the data warehouse MQ, Gartner’s BI MQ clusters its “Leaders” together tightly. But while less bold, the Business Intelligence Magic Quadrant’s claims are just as questionable as those in data warehousing.
February, 2011 edit: Here’s a partial link that works right now.
Of course, some parts do make sense. E.g.: Read more
Netezza’s marketing goes retro again
Netezza loves retro images in its marketing, such as classic rock lyrics, or psychedelic paint jobs on its SPUs. (Given the age demographics at, say, a Teradata or Netezza user conference, this isn’t as nutty as it first sounds.) Netezza’s latest is a creative peoples-liberation/revolution riff, under the name Data Liberators. The ambience of that site and especially its first download should seem instinctively familiar to anybody who recalls the Symbionese Liberation Army when it was active, or who has ever participated in a chant of “The People, United, Will Never Be Defeated!”
The substance of the first “pamphlet”, so far as I can make out, is that you should only trust vendors who do short, onsite POCs, and Oracle may not do those for Exadata. Read more
Categories: Benchmarks and POCs, Buying processes, Data warehouse appliances, Exadata, Netezza, Oracle | 2 Comments |
Gartner’s 2008 data warehouse database management system Magic Quadrant is out
February, 2011 edit: I’ve now commented on Gartner’s 2010 Data Warehouse Database Management System Magic Quadrant as well.
Gartner’s annual Magic Quadrant for data warehouse DBMS is out. Thankfully, vendors don’t seem to be taking it as seriously as usual, so I didn’t immediately hear about it. (I finally noticed it in a Greenplum pay-per-click ad.) Links to Gartner MQs tend to come and go, but as of now here are two working links to the 2008 Gartner Data Warehouse Database Management System MQ. My posts on the 2007 and 2006 MQs have also been updated with working links. Read more
High-performance analytics
For the past few months, I’ve collected a lot of data points to the effect that high-performance analytics (i.e., beyond straightforward query) is becoming increasingly important. And I’ve written about some of them at length. For example:
- MapReduce – controversial or in some cases even disappointing though it may be – has a lot of use cases (see the sketch after this list).
- It’s early days, but Netezza and Teradata (and others) are beefing up their geospatial analytic capabilities.
- Memory-centric analytics is in the spotlight.
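Since MapReduce keeps coming up, here’s a minimal sketch of the pattern itself: word count in plain Python, with no particular framework assumed, just to pin down what the map, shuffle, and reduce steps each do.

```python
# Word count, the canonical MapReduce example, with no framework assumed.
from collections import defaultdict
from itertools import chain

def map_phase(doc):
    # Map: emit a (key, value) pair per word.
    return [(word, 1) for word in doc.split()]

def shuffle(pairs):
    # Shuffle: group all values by key, as a framework would between phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    # Reduce: fold each key's values down to one result.
    return key, sum(values)

docs = ["the people united", "the people"]
mapped = chain.from_iterable(map_phase(d) for d in docs)
counts = dict(reduce_phase(k, vs) for k, vs in shuffle(mapped).items())
print(counts)  # {'the': 2, 'people': 2, 'united': 1}
```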
Ack. I can’t decide whether “analytics” should be a singular or plural noun. Thoughts?
Another area that’s come up, which I haven’t blogged about as much, is data mining in the database. Data mining accounts for a large part of data warehouse use. The traditional way to do it is to extract data from the database and dump it into SAS. But there are problems with this scenario, including: Read more
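To illustrate the contrast, here’s a minimal, self-contained sketch, with SQLite standing in for a data warehouse DBMS and a purely hypothetical table and toy linear model. The first approach hauls every row out to the client, dump-to-SAS style; the second pushes the scoring arithmetic into the database, so only results cross the wire.

```python
# SQLite stands in for a warehouse DBMS; the table, columns, and "model" are
# all hypothetical, chosen only to contrast the two data-movement patterns.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, recency REAL, frequency REAL)")
conn.executemany("INSERT INTO customers VALUES (?, ?, ?)",
                 [(1, 10.0, 5.0), (2, 3.0, 20.0), (3, 45.0, 1.0)])

# Extract-and-score: every row leaves the DBMS before any math happens.
rows = conn.execute("SELECT id, recency, frequency FROM customers").fetchall()
client_scores = {cid: 0.7 * freq - 0.2 * rec for cid, rec, freq in rows}

# In-database scoring: the toy linear model runs as SQL; only scores return.
in_db_scores = dict(conn.execute(
    "SELECT id, 0.7 * frequency - 0.2 * recency FROM customers"))

assert client_scores == in_db_scores  # same math, very different data movement
```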
Categories: Aster Data, Data warehousing, EAI, EII, ETL, ELT, ETLT, Greenplum, MapReduce, Netezza, Oracle, Parallelization, SAS Institute, Teradata | 6 Comments |
Beyond query
I sometimes describe database management systems as “big SQL interpreters,” because that’s the core of what they do. But it’s not all they do, which is why I describe them as “electronic file clerks” too. File clerks don’t just store and fetch data; they also put a lot of work into neatening, culling, and generally managing the health of their information hoards.
Fifteen years ago, online backup was already as big a competitive differentiator in the database wars as any particular SQL execution feature. Security became important in some market segments. Reliability and availability have been important from the get-go. And manageability has been crucial ever since Microsoft lapped Oracle in that regard, back when SQL Server had little else to recommend it except price.*
*Before Oracle 10g, the SQL Server vs. Oracle manageability gap was big.
Now data warehousing is demanding the same kinds of infrastructure richness.* Read more
Categories: Data warehousing, Microsoft and SQL*Server, Oracle | 1 Comment |
Big scientific databases need to be stored somehow
A year ago, Mike Stonebraker observed that conventional DBMS don’t necessarily do a great job on scientific data, and further pointed out that different kinds of science might call for different data access methods. Even so, some of the largest databases around are scientific ones, and they have to be managed somehow. For example:
- Microsoft just put out an overwrought press release. The substance seems to be that Pan-STARRS — a Jim Gray legacy, also discussed in an August 2008 Computerworld article — is adding 1.4 terabytes of image data per night, and one not-so-new database adds 15 terabytes per year of some kind of computer-simulation output used to analyze protein folding. Both run on SQL Server, of course.
- Kognitio has an astronomical database too, at Cambridge University, adding half a terabyte of data per night.
- Oracle is used for a McGill University proteomics database called CellMapBase. A figure of 50 terabytes of “mass storage” is cited, not counting tape backup and so on.
- The Large Hadron Collider, once it actually starts functioning, is projected to generate 15 petabytes of data annually, which will be initially stored on tape and then distributed to various computing centers around the world.
- Netezza is proud of its ability to serve images and the like quickly, although off the top of my head I can’t think of a major customer it has in that area. (Then again, if you just sell software, your academic discount can approach 100%; if, like Netezza, you have an actual cost of goods sold, that’s a less appealing option.)
Long-term, I imagine that the most suitable DBMS for these purposes will be MPP systems with strong datatype extensibility — e.g., DB2, PostgreSQL-based Greenplum, PostgreSQL-based Aster nCluster, or maybe Oracle.
Categories: Aster Data, Data types, Greenplum, IBM and DB2, Kognitio, Microsoft and SQL*Server, Netezza, Oracle, Parallelization, PostgreSQL, Scientific research | 1 Comment |
Oracle notes
I spent about six hours at Oracle today — talking with Andy Mendelsohn, Ray Roccaforte, Juan Loaiza, Cetin Ozbutun, et al. — and plan to write more later. For now, let me pass along a few quick comments. Read more
Categories: Data warehousing, Exadata, Oracle, Parallelization, Pricing, Storage, Theory and architecture | 10 Comments |
A data warehouse pricing complication: Software vs. appliances
Juan Loaiza of Oracle disagrees with a number of my opinions. We plan to talk about some of that when I visit on Thursday, after Teradata Partners. 🙂 But I’d like to throw one of his ideas out there right now. Juan contends that comparisons of Oracle Exadata pricing are apt to be misleading because — among other reasons — Oracle licenses can be reused on other hardware in ways that appliance software cannot. (The same reasoning would of course apply to almost everybody else except Teradata and Netezza.) Read more
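To see the shape of his argument, here’s a back-of-envelope sketch in which every dollar figure is invented purely for illustration; real license, maintenance, and hardware prices would change the numbers, though not the structure of the comparison.

```python
# Hypothetical figures only -- none of these are real Oracle or appliance
# prices. The point: a reusable DBMS license survives hardware refreshes,
# while appliance software is repurchased with each generation of hardware.
GENERATIONS = 3                  # hardware refresh cycles considered

sw_license = 1_000_000           # reusable software license (made up)
sw_hw_per_gen = 400_000          # commodity hardware, bought each cycle (made up)
software_total = sw_license + GENERATIONS * sw_hw_per_gen

appliance_per_gen = 1_200_000    # bundled hw+sw, repurchased each cycle (made up)
appliance_total = GENERATIONS * appliance_per_gen

print(f"software route:  ${software_total:,}")   # $2,200,000
print(f"appliance route: ${appliance_total:,}")  # $3,600,000
```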
Categories: Data warehouse appliances, Data warehousing, Exadata, Oracle, Pricing | 2 Comments |
Automatic redistribution of data warehouse data
In a recent Oracle Exadata FAQ, Kevin Closson writes:
Q. […] don’t some of the DW vendors split the data up in a shared nothing method. Thus when the data has to be repartitioned it gets expensive. Whereas here you just add another cell and ASM goes to work in the background. (depending upon the ASM power level you set.)
A. All the DW Appliance vendors implement shared-nothing so, yes, the data is chopped up into physical partitions. If you add hardware to increase performance of queries against your current dataset, the data will have to be reloaded into the new partitioning scheme. As has always been the case with ASM, adding new disks (and therefore Exadata Storage Server cells) will cause the existing data to be redistributed automatically over all (including the new) drives. This ASM data redistribution is an online function.
Hmm. That sounds much like the story I’ve heard from various other data warehousing DBMS vendors as well.
Rather than try to speak for them, however, I’ll just post this and see whether they choose to add anything to the comment thread.
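For what it’s worth, here’s a toy sketch of why repartitioning is expensive in the first place. It uses naive modulo placement, which no serious vendor ships as-is; under it, growing a cluster from 8 nodes to 9 relocates almost every row, and that movement is exactly what smarter placement schemes and online rebalancers such as ASM try to reduce or hide.

```python
# Naive modulo placement -- a strawman, not any vendor's actual algorithm --
# showing how much data moves when a node joins a shared-nothing system.
def node_for(key, num_nodes):
    return hash(key) % num_nodes

keys = [f"row-{i}" for i in range(100_000)]

before = {k: node_for(k, 8) for k in keys}  # 8-node cluster
after = {k: node_for(k, 9) for k in keys}   # one node added

moved = sum(1 for k in keys if before[k] != after[k])
print(f"{moved / len(keys):.0%} of rows change nodes")  # roughly 8/9, i.e. ~89%
```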
Categories: Data warehouse appliances, Data warehousing, Exadata, Oracle | 7 Comments |
Oracle crosses the line on integrity :(
Dana Gardner did a puff interview with Oracle and HP regarding Exadata, and clearly disclosed sponsorship up top. So far, so good. My sponsored work is a lot more independent than that, but I’m probably an outlier at the other extreme. Gardner’s view of what’s ethical in this regard is a common one, and the point of this post isn’t to argue with his choices, nor with those of the people who hired him.
Where things went badly awry is on an Oracle corporate blog, which said: Read more
Categories: Exadata, Oracle | 6 Comments |