Oracle
Analysis of software titan Oracle and its efforts in database management, analytics, and middleware. Related subjects include:
- Oracle TimesTen
- (in The Monash Report) Operational and strategic issues for Oracle
- (in Software Memories) Historical notes on Oracle
- Most of what’s written about in this blog
Eight kinds of analytic database (Part 1)
Analytic data management technology has blossomed, leading to many questions along the lines of “So which products should I use for which category of problem?” The old EDW/data mart dichotomy is hopelessly outdated for that purpose, and adding a third category for “big data” is little help.
Let’s try eight categories instead. While no categorization is ever perfect, these each have at least some degree of technical homogeneity. Figuring out which types of analytic database you have or need — and in most cases you’ll need several — is a great early step in your analytic technology planning. Read more
Observations on Oracle pricing
A couple of months ago, Oracle asked me to pull some observations on pricing until after the earnings call that just occurred, and I grudgingly acquiesced. In the interim, more information on Oracle pricing has emerged (including in the comment thread to that post). The original notes are:
Oracle disputes some common claims about its cost and pricing. In particular, Oracle software maintenance costs a fixed 22% of your annual license price, so if you get a discount on your licenses, it ripples through to your maintenance. This is true even if you have an all-you-can-eat ULA (Unlimited License Agreement).
- Based on that, Oracle contends that Exadata isn’t all that expensive if you have a suitable ULA. You have to buy the hardware and the storage software, but the database server software is effectively free. (Whether your use of additional licenses affects the price of your ULA when it comes up for renewal might, of course, be a different matter.)
- Nothing in that discussion obviates the point that if you’re just using Oracle Standard Edition, upgrading to Oracle Enterprise Edition, associated chargeable options, and/or Exadata can be seriously expensive.
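To make the 22% arithmetic concrete, here’s a minimal sketch. The list price and discount are invented for illustration; the only thing taken from the discussion above is the claimed formula (maintenance as a flat 22% of the net license price):

```python
# Sketch of the pricing claim above: maintenance runs a flat 22% of
# the net (post-discount) license price, so license discounts ripple
# through to maintenance. List price and discount here are invented.
MAINTENANCE_RATE = 0.22

def annual_maintenance(list_price: float, discount: float) -> float:
    """Annual maintenance on a discounted license purchase."""
    net_license = list_price * (1 - discount)
    return net_license * MAINTENANCE_RATE

# A hypothetical $1,000,000 list purchase at a 50% discount:
print(f"${annual_maintenance(1_000_000, 0.50):,.0f}/year")  # $110,000/year
```

In other words, a 50% license discount halves your maintenance bill too — which is exactly why Oracle argues its effective prices are lower than list-price comparisons suggest.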
Forthcoming Oracle appliances
Edit: I checked with Oracle, and it’s indeed TimesTen that’s supposed to be the basis of this new appliance, as per a comment below. That would be less cool, alas.
Oracle seems to have said on yesterday’s conference call that Oracle OpenWorld (first week in October) will feature appliances based on Tangosol and Hadoop. As I post this, the Seeking Alpha transcript of Oracle’s call is riddled with typos. Bolded comments below are by me. Read more
Categories: Data warehouse appliances, Hadoop, In-memory DBMS, MapReduce, Memory-centric data management, Object, Oracle | 8 Comments |
Notes and links, June 15, 2011
Five things: Read more
Slashdot venting thread about Oracle/Sun hardware
Slashdot has what amounts to a venting thread about Oracle/Sun hardware. The one consistent favorable theme is that Sun hardware is good stuff if you want to run Oracle. Otherwise, comments repeatedly say:
- Product discounts are down, effectively creating a price increase.
- Service prices are way up for some customers, because cheaper service options have been eliminated.
- Service quality is unsatisfactory.
- Oracle is difficult to do business with.
So far, I haven’t seen any comments to the effect “I don’t know what you guys are talking about; we’re perfectly happy with Sun”, but surely those will come too.
Categories: Oracle, Pricing | 4 Comments |
Quick thoughts on Oracle-on-Amazon
Amazon has a page up for what it calls Amazon RDS for Oracle Database. You can rent Amazon instances suitable for running Oracle, and bring your own license (BYOL), or you can rent a “License Included” instance that includes Oracle Standard Edition One (a cheap version of Oracle that is limited to two sockets).
My quick thoughts start:
- Mainly, this isn’t for production usage. But exceptions might arise when:
- An application is expected to have only a short lifespan, from creation to abandonment, in support of a specific project.
- There is an extreme internal-politics bias toward operating versus capital expenses, or something like that, forcing a user department into cloud production deployment even when it doesn’t make much rational sense.
- An application is small enough, or the situation is sufficiently desperate, that any inefficiencies are outweighed by convenience.
- There is non-production appeal. In particular:
- Spinning up a quick cloud instance can make a lot of sense for a developer.
- The same goes if you want to sell an Oracle-based application and need to offer demo/test capabilities.
- The same might go for off-site replication/disaster recovery.
Of course, those are all standard observations every time something that’s basically on-premises software is offered in the cloud. They’re only reinforced by the fact that the only Oracle software Amazon can actually license you is a particularly low-end edition.
And Oracle is indeed on-premises software. In particular, Oracle is hard enough to manage when it’s on your premises, with a known hardware configuration; who would want to try to manage a production instance of Oracle in the cloud?
Categories: Amazon and its cloud, Cloud computing, Oracle | 7 Comments |
Traditional databases will eventually wind up in RAM
In January, 2010, I posited that it might be helpful to view data as being divided into three categories:
- Human/Tabular data — i.e., human-generated data that fits well into relational tables or arrays.
- Human/Nontabular data — i.e., all other data generated by humans.
- Machine-Generated data.
I won’t now stand by every nuance in that post, which may differ slightly from those in my more recent posts about machine-generated data and poly-structured databases. But one general idea is hard to dispute:
Traditional database data — records of human transactional activity, referred to as “Human/Tabular data” above — will not grow as fast as Moore’s Law makes computer chips cheaper.
And that point has a straightforward corollary, namely:
It will become ever more affordable to put traditional database data entirely into RAM. Read more
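The corollary can be sketched numerically. Both rates below are invented for illustration — the point holds for any case where data growth lags the decline in RAM prices:

```python
# Toy model of the corollary: if traditional database data grows more
# slowly than RAM gets cheap, the cost of holding all of it in RAM
# falls over time. Both rates below are invented, not measured.
DATA_GROWTH = 0.20       # data volume grows 20%/year (assumed)
RAM_COST_DECLINE = 0.35  # RAM $/GB falls 35%/year (assumed)

def ram_cost_ratio(years: int) -> float:
    """Cost of keeping the full dataset in RAM, relative to year 0."""
    data_volume = (1 + DATA_GROWTH) ** years
    cost_per_gb = (1 - RAM_COST_DECLINE) ** years
    return data_volume * cost_per_gb

for y in (0, 5, 10):
    print(f"year {y}: {ram_cost_ratio(y):.2f}x the year-0 cost")
```

Under these assumed rates, the all-in-RAM cost falls by roughly two-thirds every five years, even as the dataset itself more than doubles.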
DB2 OLTP scale-out: pureScale
Tim Vincent of IBM talked me through DB2 pureScale Monday. IBM DB2 pureScale is a kind of shared-disk scale-out parallel OLTP DBMS, with some interesting twists. IBM’s scalability claims for pureScale, on a 90% read/10% write workload, include:
- 95% scalability up to 64 machines
- 90% scalability up to 88 machines
- 89% scalability up to 112 machines
- 84% scalability up to 128 machines
More precisely, those are counts of cluster “members,” but the recommended configuration is one member per operating system instance — i.e. one member per machine — for reasons of availability. In an 80% read/20% write workload, scalability is less — perhaps 90% scalability over 16 members.
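One rough way to read those percentages (my framing, not IBM’s methodology) is as effective throughput multipliers — N members at S% scalability do roughly the work of N × S members:

```python
# Reading IBM's scalability claims as effective throughput: a cluster
# of N members at S% scalability behaves like roughly N * S members.
# The figures are the claims quoted above; the framing is mine.
claims = {64: 0.95, 88: 0.90, 112: 0.89, 128: 0.84}

def effective_members(members: int, scalability: float) -> float:
    """Single-member equivalents of throughput for the cluster."""
    return members * scalability

for n in sorted(claims):
    print(f"{n} members ~ {effective_members(n, claims[n]):.1f}x one member")
```

So even at the 128-member end, IBM is claiming well over 100x single-member throughput on the read-heavy workload.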
Several elements of IBM’s DB2 pureScale architecture are pretty straightforward:
- There are multiple pureScale members (machines), each with its own instance of DB2.
- There’s an RDMA (Remote Direct Memory Access) interconnect, perhaps InfiniBand. (The point of InfiniBand and other RDMA is that moving data doesn’t require interrupts, and hence doesn’t cost many CPU cycles.)
- The DB2 pureScale members share access to the database on a disk array.
- Each DB2 pureScale member has its own log, also on the disk array.
Something called GPFS (General Parallel File System), which comes bundled with DB2, sits underneath all this. It’s all based on the mainframe technology IBM Parallel Sysplex.
The weirdest part (to me) of DB2 pureScale is something called the Global Cluster Facility, which runs on its own set of boxes. (Edit: Actually, see Tim Vincent’s comment below.) Read more
Categories: Cache, Clustering, IBM and DB2, OLTP, Oracle | 15 Comments |
Oracle on active-active replication
I am beginning to understand better some of the reasons that Oracle likes to review analyst publications before they go out. Notwithstanding what an Oracle executive told me Friday, I received an email from Irem Radzik of Oracle which said in part:
I am the product marketing director for Oracle GoldenGate product. We have noticed your blog post on Exadata covering a description for Active Data Guard. It refers to ADG being the “preferred way of Active-Active Oracle replication”.
I’d like to request correction on this comment as ADG does not have bidirectional replication capabilities which is required for Active-Active replication. GoldenGate is a complementary product to Active Data Guard with its bidirectional replication capabilities (as well as heterogeneous database support) and it is the preferred solution for Active-Active database replication.
Note also that I corrected the spelling of the product name — notwithstanding that at least one Oracle person read the post beforehand, requested a different change, but didn’t notice that error.
Categories: Oracle | 6 Comments |
Oracle and IBM workload management
When last night’s Oracle/Exadata post got too long — and before I knew Oracle would request a different section be cut — I set aside my comments on Oracle’s workload management story to post separately. Elements of Oracle’s workload management story include:
- Oracle’s workload management product is called Oracle Database Resource Manager.
- Oracle Database Resource Manager has long managed CPU. For Exadata, Oracle added in management of I/O. Management of RAM is coming.
- Another aspect of Oracle workload management is “instance caging.” If you’re running multiple instances of Oracle on the same box – e.g. one with 128 cores and thus 256 threads – instance caging can keep an instance confined to a specific number of threads.
- Policies can let some classes of user get access to more threads in Oracle Parallel Query than others do.*
- Oracle offers a QoS (Quality of Service) layer, at least on Exadata, that tries to use Oracle’s workload management capabilities to enforce SLAs (Service Level Agreements). For example, if you want a certain query to always be answered in no more than 0.3 seconds, it tries to make that happen. However, this technology is new in the current Oracle release, and will be enhanced going forward.
*Recall that “degrees of parallelism” in Oracle Parallel Query can now be set automagically.
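As a toy illustration of the instance caging idea — the box size, instance names, and interface below are all invented, and this is in no way Oracle’s actual API:

```python
# Toy illustration of "instance caging": each database instance on a
# shared box is confined to a fixed number of CPU threads, so one
# instance can't starve the others. All names and figures are invented.
TOTAL_THREADS = 256  # e.g. a 128-core box with 2 threads per core

def caging_plan_fits(caps: dict) -> bool:
    """True if the per-instance thread caps fit within the box."""
    return sum(caps.values()) <= TOTAL_THREADS

print(caging_plan_fits({"prod": 192, "dev": 32, "test": 32}))  # True
```

The real mechanism enforces the caps at runtime, of course; the sketch only shows the capacity arithmetic behind the concept.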
One reason I split out this discussion of workload management is that I also talked with IBM’s Tim Vincent yesterday, who added some insight to what I already wrote last August about DB2/InfoSphere Warehouse workload management. Specifically:
- DB2/InfoSphere Warehouse workload management has multiple ways to manage use of CPU resources.
- DB2/InfoSphere Warehouse workload management doesn’t directly manage consumption of I/O or RAM resources. However, it can influence usage of I/O or RAM by:
- Limiting the number of rows read or returned.
- Adjusting priorities as to which queries get to prefetch the most records.
- DB2/InfoSphere Warehouse workload management doesn’t allow you to directly set an SLA mandating query response time. However, if query response times exceed a target SLA, DB2/InfoSphere Warehouse workload management can cause a statistics dump that might help you tune your way out of the problem.
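The DB2-style reactive approach can be sketched as follows. This is a toy model, not IBM’s API; the 0.3-second target echoes the Oracle example above, and everything else is invented:

```python
# Toy sketch of the DB2-style behavior described above: no hard
# response-time enforcement, just diagnostics when a query misses
# its SLA target. Names and thresholds are invented.
SLA_SECONDS = 0.3

def record_query(elapsed: float, stats: dict) -> bool:
    """Record one query; return True if it missed the SLA target."""
    stats["queries"] = stats.get("queries", 0) + 1
    if elapsed > SLA_SECONDS:
        stats["sla_misses"] = stats.get("sla_misses", 0) + 1
        return True  # in DB2's case: trigger a statistics dump for tuning
    return False

stats = {}
for t in (0.1, 0.25, 0.45):
    record_query(t, stats)
print(stats)  # {'queries': 3, 'sla_misses': 1}
```

The contrast with Oracle’s QoS layer is the direction of action: Oracle’s tries to steer resources so the deadline is met, while DB2’s hands you evidence afterward so you can tune your way out of the problem.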
Categories: Data warehousing, IBM and DB2, Oracle, Workload management | Leave a Comment |