DBMS product categories
Analysis of database management technology in specific product categories.
Intersystems Caché highlights
I talked with Robert Nagle of Intersystems last week, and it went better than at least one other Intersystems briefing I’ve had. Intersystems’ main product is Caché, an object-oriented DBMS introduced in 1997 (before that, Intersystems focused on the fourth-generation programming language M, renamed from MUMPS). Unlike most other OODBMS, Caché is used for a lot of stuff one would think an RDBMS would be used for, across all sorts of industries. That said, there’s a distinct health-care focus to Intersystems, in that:
- MUMPS, the original Intersystems technology, was focused on health care.
- The reasons Intersystems went object-oriented have a lot to do with the structure of health-care records (a toy illustration follows this list).
- Intersystems’ biggest and most visible ISVs are in the health-care area.
- Intersystems is actually beginning to sell an electronic health records system called TrakCare around the world (but not in the US, where it has lots of large competitive VARs).
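To make the health-records point concrete: a patient record is naturally a nested structure. The following is an invented toy example in Python, not Intersystems code; the idea is that an object (or hierarchical) DBMS can store and fetch such a record as one unit, where a normalized relational design would spread it across several tables and reassemble it with joins.

```python
# Invented toy example (not Intersystems code): a patient record is
# naturally a nested structure.
patient = {
    "mrn": "12345",
    "name": "Jane Doe",
    "visits": [
        {
            "date": "2009-11-02",
            "observations": [
                {"code": "BP", "value": "120/80"},
                {"code": "HR", "value": 72},
            ],
        },
    ],
}

# An object (or hierarchical) DBMS can store and retrieve this as one unit.
# A normalized relational design would split it across patient, visit, and
# observation tables, then reassemble it with joins at query time.
for visit in patient["visits"]:
    for obs in visit["observations"]:
        print(patient["mrn"], visit["date"], obs["code"], obs["value"])
```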
Note: Intersystems Caché is sold mainly through VARs (Value-Added Resellers), aka ISVs/OEMs. I.e., it’s sold by people who write applications on top of it.
So far as I understand – and this is still pretty vague and apt to be partially erroneous – the Intersystems Caché technical story goes something like this: Read more
Categories: Data models and architecture, Emulation, transparency, portability, Health care, Intersystems and Caché, Mid-range, Object, OLTP, Sybase, Theory and architecture | 8 Comments |
This and that
I have various subjects backed up that I don’t really want to write about at traditional blog-post length. Here are a few of them. Read more
Oracle lifts the cloud hanging over MySQL storage engine vendors
Oracle has put out a press release promising to play nicely with MySQL if its Sun takeover is approved. The parts in italics below are quotes. My comments are in plain text.
1. Continued Availability of Storage Engine APIs. Oracle shall maintain and periodically enhance MySQL’s Pluggable Storage Engine Architecture to allow users the flexibility to choose from a portfolio of native and third party supplied storage engines.
MySQL’s Pluggable Storage Engine Architecture shall mean MySQL’s current practice of using publicly-available, documented application programming interfaces to allow storage engine vendors to “plug” into the MySQL database server. Documentation shall be consistent with the documentation currently provided by Sun.
Well, duh.
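As an aside for readers who haven’t seen a pluggable-engine design up close, here’s a minimal sketch of the general idea: interchangeable engines registered behind a fixed interface, selectable per table. It’s deliberately toy Python with invented names, not MySQL’s actual API (real MySQL engines are C++ plugins).

```python
from abc import ABC, abstractmethod

class StorageEngine(ABC):
    """Invented stand-in for the real storage engine interface."""
    @abstractmethod
    def write_row(self, table: str, row: dict) -> None: ...
    @abstractmethod
    def scan(self, table: str):
        """Yield every row of the named table."""

class MemoryEngine(StorageEngine):
    """Toy engine that keeps tables as in-memory lists."""
    def __init__(self):
        self._tables = {}
    def write_row(self, table, row):
        self._tables.setdefault(table, []).append(row)
    def scan(self, table):
        yield from self._tables.get(table, [])

# The server core knows only the interface; engines register by name,
# loosely analogous to choosing ENGINE=InnoDB or ENGINE=MyISAM per table.
ENGINES = {}

def register_engine(name, engine):
    ENGINES[name] = engine

register_engine("memory", MemoryEngine())
eng = ENGINES["memory"]
eng.write_row("t1", {"id": 1, "val": "x"})
print(list(eng.scan("t1")))   # [{'id': 1, 'val': 'x'}]
```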
2. Non-assertion. As copyright holder, Oracle will change Sun’s current policy and shall not assert or threaten to assert against anyone that a third party vendor’s implementations of storage engines must be released under the GPL because they have implemented the application programming interfaces available as part of MySQL’s Pluggable Storage Engine Architecture.
A commercial license will not be required by Oracle from third party storage engine vendors in order to implement the application programming interfaces available as part of MySQL’s Pluggable Storage Engine Architecture.
Oracle shall reproduce this commitment in contractual commitments to storage engine vendors who at present have a commercial license with Sun.
This is the biggie, lifting a major cloud from the MySQL storage engine business. It sounds like the third of four options I suggested as to how Oracle could legitimately earn antitrust approval of its MySQL takeover. Sure, Infobright, Kickfire, et al. already had what they saw as adequate safeguards or contingency plans vs. Oracle skullduggery. It’s still big even so.
(Quoted out of order.) The geographic scope of these commitments shall be worldwide and these commitments shall continue until the fifth anniversary of the closing of the transaction.
Not a disaster, but with respect to at least point #2 there should be no time limit whatsoever. I’d like to see the EC require that change as a further Oracle concession. Read more
Categories: MySQL, Open source, Oracle, Pricing | 17 Comments |
Notes on RainStor, the company formerly known as Clearpace
Information preservation* DBMS vendor Clearpace officially changed its name to RainStor this week. RainStor is also relocating its CEO John Bantleman and more generally its headquarters to San Francisco. This all led to a visit with John and his colleague Ramon Chen, highlights of which included: Read more
Categories: Archiving and information preservation, Market share and customer counts, Oracle, Rainstor, SenSage, Telecommunications | 1 Comment |
Comments on a fabricated press release quote
My clients at Kickfire put out a press release last week quoting me as saying things I neither said nor believe. The press release is about a “Queen For A Day” kind of contest announced way back in April, in which users were invited to submit stories of their data warehouse problems, with the biggest sob stories winning free Kickfire appliances. The fabricated “quote” reads: Read more
Categories: About this blog, Data warehouse appliances, Data warehousing, Kickfire, Market share and customer counts, Sybase | 3 Comments |
Boston Big Data Summit keynote outline
Last month, Bob Zurek asked me to give a talk on “Big Data”, where “big” is anything from a few terabytes on up, then moderate a panel on cloud computing. We agreed that I could talk just from notes, without slides. So, since I have them typed up, I’m posting them below.
Calpont’s InfiniDB
Since its inception, Calpont has gone through multiple management teams, strategies, and investor groups. What it had never done was actually ship a product. Last week, however, Calpont introduced a free/open source DBMS, InfiniDB, with technical details somewhat reminiscent of what Calpont was promising last April. Highlights include:
- Like Infobright, Calpont’s InfiniDB is a columnar DBMS consisting of a MySQL front end and a columnar storage engine.
- Community edition InfiniDB runs on a single server.
- One of commercial/enterprise edition InfiniDB’s main claims to fame will be MPP support.
- There’s no announced time frame for commercial edition InfiniDB.
- InfiniDB’s current compression story is dictionary/token only, with decompression occurring before joins are executed. Improvement is a roadmap item.
- Indeed, InfiniDB has many roadmap items, a few of which can be found here. Also, a great overview of InfiniDB’s current state and roadmap can be found in this MySQL Performance Blog thread. (And follow the links there to find performance discussions of other free analytic DBMS.)
- One thing InfiniDB already has that is still a roadmap item for Infobright is the ability to run a query across multiple cores at once.
- One thing free InfiniDB has that Infobright only offers in its Enterprise Edition is ACID-compliant Insert/Update/Delete. (Note: I wish people would stop saying that Infobright Enterprise Edition isn’t ACID-compliant, since that point was cleared up a while ago.)
- InfiniDB has no indexes or materialized views.
- However, InfiniDB’s retrieval is expedited by something called “Extents,” which sounds a lot like Netezza’s zone maps. (A toy sketch after this list illustrates extents, along with the dictionary encoding mentioned above.)
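Here’s that sketch: invented Python, not InfiniDB code, showing dictionary/token compression of a string column and extent-style min/max metadata that lets whole blocks be skipped, zone-map fashion.

```python
# Invented toy (not InfiniDB code) illustrating two bullets above:
# dictionary/token compression of a string column, and extent-style
# min/max metadata used to skip blocks, zone-map fashion.

values = ["NY", "CA", "NY", "TX", "CA", "NY", "TX", "TX"]

# Dictionary encoding: store each distinct string once, plus small tokens.
dictionary = {v: i for i, v in enumerate(dict.fromkeys(values))}
tokens = [dictionary[v] for v in values]
print(tokens)  # [0, 1, 0, 2, 1, 0, 2, 2]

# Extents: fixed-size blocks of a column, each carrying min/max metadata.
EXTENT_SIZE = 4
column = [3, 7, 5, 6, 41, 55, 48, 60]
extents = [
    (min(block), max(block), block)
    for block in (column[i:i + EXTENT_SIZE]
                  for i in range(0, len(column), EXTENT_SIZE))
]

# A predicate like "WHERE col > 40" need only touch extents whose max
# exceeds 40; the first extent (max 7) is skipped without being read.
matches = [v for lo, hi, block in extents if hi > 40 for v in block if v > 40]
print(matches)  # [41, 55, 48, 60]
```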
Being on vacation, I’ll stop there for now. (If it weren’t for Tropical Storm/Depression Ida, I might not even be posting this much until I get back.)
Categories: Analytic technologies, Calpont, Columnar database management, Data warehousing, Database compression, Infobright, MySQL, Open source | 3 Comments |
Teradata hardware strategy and tactics
In my opinion, the most important takeaways about Teradata’s hardware strategy from the Teradata Partners conference last week are:
- Teradata’s future lies in solid-state memory. That’s in line with what Carson Schmidt told me six months ago.
- To Teradata’s surprise, the solid-state future is imminent. Teradata is 6-9 months further along with solid-state drives (SSD) than it thought a year ago it would be at this point.
- Short-term, Teradata is going to increase the number of appliance kinds it sells. I didn’t actually get details on anything but the new SSD-based Blurr, but it seems there will be others as well.
- Teradata’s eventual future is to mix and match parts (especially different kinds of storage) in a more modular product line. Teradata Virtual Storage is of pretty limited value otherwise. I probably believe in this modular future more emphatically than Teradata itself does, because I think it will meet users’ needs more effectively than fixed appliance configurations can.
In addition, some non-SSD componentry tidbits from Carson Schmidt include:
- Teradata really likes Intel’s Nehalem CPUs, with special reference to multi-threading, QuickPath interconnect, and integrated memory controller. Obviously, Nehalem-based Teradata boxes should be expected in the not too distant future.
- Teradata really likes Nehalem’s successor Westmere too, and expects to be pretty fast to market with it (faster than with Nehalem) because Nehalem and Westmere are plug-compatible in motherboards.
- Teradata will go to 10-gigabit Ethernet for external connectivity on all its equipment, which should improve load performance.
- Teradata will also go to 10-gigabit Ethernet to play the Bynet role on appliances. Tests are indicating this improves query performance.
- What’s more, Teradata believes there will be no practical scale-out limitations with 10-gigabit Ethernet.
- Teradata hasn’t decided yet what to do about 2.5” SFF (Small Form Factor) disk drives, but is leaning toward adopting them. Benefits would include lower power consumption and smaller cabinets.
- Also on Carson’s list of “exciting” future technologies is SAS 2.0, which at 6 gigabits/second doubles the I/O bandwidth of SAS 1.0.
- Carson is even excited about removing universal power supplies from the cabinets, increasing space for other components.
- Teradata picked Intel’s Host Bus Adapters for 10-gigabit Ethernet. The switch supplier hasn’t been determined yet.
Let’s get back now to SSDs, because over the next few years they’re the potential game-changer. Read more
Categories: Data warehouse appliances, Data warehousing, Solid-state memory, Storage, Teradata | 13 Comments |
Reports of perfectly-balanced hardware configurations are greatly exaggerated
Data warehouse appliance and software appliance vendors like to claim that they’ve worked out just the right hardware configuration(s), and that a single configuration is correct for a fairly broad range of workloads. But there are a lot of reasons to be dubious about that. Specific vendor evidence includes:
- Teradata ascribes considerable importance to a Virtual Storage technology whose main purpose is to allow mixing of heterogeneous storage devices in a single system. And the discussion rarely suggests that these parts will be in a rigid fixed relationship.
- Netezza — as Teradata keeps reminding me — often sells boxes with the expectation that they won’t be filled with data, so as to increase spindle count and hence performance.
- Oracle/Sun have dropped some comments about Exadata being more flexibly configured going forward.
- Kickfire’s new “high-end” appliance lets you attach fairly arbitrary amounts of external storage.
- And of course, software-only analytic DBMS vendors run their software in all sorts of hardware and storage environments.
What’s more, the claim never made a lot of sense anyway. With the rarest of exceptions, even a single data warehouse’s workload will contain different queries that strain different parts of the system in different ratios. Calculating the “ideal” hardware configuration for that single workload would be prohibitively difficult. And even if one could calculate it, it almost surely would be different from another user’s “ideal” configuration. How a single hardware configuration can be “ideally balanced” for a broad class of use cases boggles the imagination.
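A back-of-the-envelope illustration, with numbers invented purely for the arithmetic:

```python
# Invented numbers; the point is only that each workload mix implies a
# different "balanced" hardware ratio.
workloads = {
    "workload_A": {"scan_heavy": 0.8, "join_heavy": 0.2},
    "workload_B": {"scan_heavy": 0.3, "join_heavy": 0.7},
}

# Hypothetical per-query demands: (I/O units, CPU units) by query type.
demands = {"scan_heavy": (10.0, 1.0), "join_heavy": (2.0, 8.0)}

for name, mix in workloads.items():
    io = sum(frac * demands[q][0] for q, frac in mix.items())
    cpu = sum(frac * demands[q][1] for q, frac in mix.items())
    print(f"{name}: wants {io / cpu:.1f} units of I/O per unit of CPU")
```

With these made-up figures, the “balanced” I/O-to-CPU ratio differs by almost 5X between two quite ordinary query mixes, before even considering memory, concurrency, or interconnect.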
Categories: Data warehouse appliances, Data warehousing, Exadata, Kickfire, Netezza, Oracle, Teradata | 6 Comments |
Greenplum Single-Node Edition — sometimes free is a real cool price
Greenplum is announcing today that you can run Greenplum software on a single 8-core commodity server, free. First and foremost, that’s a strong statement that Greenplum wants enterprises to pay it for Greenplum’s parallelization/”private cloud” capabilities. Second, it may be an attractive gift to a variety of folks who want to extract insight from terabyte-scale databases of various kinds.
Greenplum Single-Node Edition:
- Is free of charge, although you can buy support.
- Has no restrictions on use, production or otherwise.
- Has no restrictions on database size.
- Is closed-source.
For those who want free, terabyte-scale data warehousing software, Greenplum Single-Node Edition may be quite appealing, considering that the main available alternatives are:
- General-purpose open-source DBMS, such as PostgreSQL and MySQL (lacking analytic DBMS performance and features)
- Infobright Community Edition (the other strong choice; Infobright’s commercial sales success indicates the solidity of its technology)
- Rough research-project code and other questionable open source offerings
- Crippleware from other commercial analytic DBMS vendors (e.g., Teradata)
For example, comparing PostgreSQL-based Greenplum with PostgreSQL itself, Greenplum offers:
- The ability to scale out queries across all cores in your box (and no, pgpool is not a serious alternative; see the sketch after this list)
- Storage alternatives such as columnar (I am told that EnterpriseDB recently stopped funding a project for a PostgreSQL columnar option)
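To make the multi-core point concrete, here’s a tiny invented Python sketch of the capability in question: splitting one query’s work (here, a sum) across every core in the box and combining partial results. It illustrates the concept only; it is not Greenplum code, and the real thing parallelizes scans, joins, and aggregations inside the database engine.

```python
# Invented sketch (not Greenplum code) of intra-query parallelism:
# one aggregation split across all the cores in a single box.
from multiprocessing import Pool
import os

def partial_sum(chunk):
    # Each worker aggregates its own slice of the scanned column.
    return sum(chunk)

if __name__ == "__main__":
    column = list(range(10_000_000))       # stand-in for a scanned column
    n = os.cpu_count() or 1
    step = -(-len(column) // n)            # ceiling division
    chunks = [column[i:i + step] for i in range(0, len(column), step)]
    with Pool(n) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(total)                           # 49999995000000
```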