OLTP
Analysis of database management systems designed with a focus on OLTP (OnLine Transaction Processing) uses.
Traditional databases will eventually wind up in RAM
In January, 2010, I posited that it might be helpful to view data as being divided into three categories:
- Human/Tabular data — i.e., human-generated data that fits well into relational tables or arrays.
- Human/Nontabular data — i.e., all other data generated by humans.
- Machine-Generated data.
I won’t now stand by every nuance in that post, which may differ slightly from those in my more recent posts about machine-generated data and poly-structured databases. But one general idea is hard to dispute:
Traditional database data — records of human transactional activity, referred to as “Human/Tabular data” above — will not grow as fast as Moore’s Law makes computer chips cheaper.
And that point has a straightforward corollary, namely:
It will become ever more affordable to put traditional database data entirely into RAM. Read more
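To make the corollary concrete, here’s a toy back-of-envelope calculation in Python. All the figures in it — growth rates, database size, RAM prices — are illustrative assumptions of mine, not measurements; the point is only the direction of the trend.

```python
# Toy back-of-envelope: cost to hold a transactional database entirely in RAM.
# All figures below are illustrative assumptions, not measurements.

DATA_GROWTH = 1.25        # assume transactional data grows 25%/year
RAM_COST_DECLINE = 0.70   # assume $/GB of RAM falls ~30%/year (Moore's Law-ish)

db_size_gb = 1_000        # hypothetical 1 TB transactional database today
ram_cost_per_gb = 50.0    # hypothetical $/GB of server RAM today

for year in range(0, 11, 2):
    cost = (db_size_gb * DATA_GROWTH ** year) * (ram_cost_per_gb * RAM_COST_DECLINE ** year)
    print(f"Year {year:2d}: ~${cost:,.0f} to hold the whole database in RAM")

# Because 1.25 * 0.70 < 1, the total cost shrinks every year --
# which is the whole point: the data grows more slowly than RAM gets cheap.
```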
Object-oriented database management systems (OODBMS)
There seems to be a fair amount of confusion about object-oriented database management systems (OODBMS). Let’s start with a working definition:
An object-oriented database management system (OODBMS, but sometimes just called “object database”) is a DBMS that stores data in a logical model that is closely aligned with an application program’s object model. Of course, an OODBMS will have a physical data model optimized for the kinds of logical data model it expects.
If you’re guessing from that definition that there can be difficulties drawing boundaries between the application, the application programming language, the data manipulation language, and/or the DBMS — you’re right. Those difficulties have been a big factor in relegating OODBMS to being a relatively niche technology to date.
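To make the definition concrete before getting to examples, here’s a minimal Python sketch of the OODBMS idea — persisting application objects and navigating references directly, rather than mapping objects to rows and joining them back together. The `ObjectStore` class is hypothetical, not any particular product’s API:

```python
# Hypothetical, minimal illustration of the OODBMS idea -- not a real product API.
# Objects are stored as-is; relationships are object references, not foreign keys.

class Customer:
    def __init__(self, name):
        self.name = name
        self.orders = []          # direct references, not a join table

class Order:
    def __init__(self, total):
        self.total = total

class ObjectStore:
    """Toy 'database': a dict of root objects, persisted by identity."""
    def __init__(self):
        self.roots = {}

    def persist(self, key, obj):
        self.roots[key] = obj     # a real OODBMS would write this to disk

    def get(self, key):
        return self.roots[key]

db = ObjectStore()
alice = Customer("Alice")
alice.orders.append(Order(99.50))
db.persist("alice", alice)

# Retrieval navigates the object graph directly -- no O/R mapping, no joins.
print(db.get("alice").orders[0].total)   # -> 99.5
```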
Examples of what I would call OODBMS include: Read more
Categories: Cache, In-memory DBMS, Intersystems and Cache', Memory-centric data management, Objectivity and Infinite Graph, OLTP, Software as a Service (SaaS), Starcounter | 20 Comments |
Starcounter high-speed memory-centric object-oriented DBMS, coming soon
Since posting recently about Starcounter, I’ve had the chance to actually talk with the company (twice). Hence I know more than before. 🙂 Starcounter:
- Has been around as a company since 2006.
- Has developed memory-centric object-oriented DBMS technology that has been OEMed by a few application software companies (especially in bricks-and-mortar retailing and in online advertising).
- Is planning to actually launch an OODBMS product sometime this summer.
- Has 14 employees (most or all of whom are in Sweden, which is also where I think Starcounter’s current customers are centered).
- Is planning to shift emphasis soon to the US market.
Starcounter’s value propositions are programming ease (no object/relational impedance mismatch) and performance. Starcounter believes its DBMS has 100X the performance of conventional DBMS at short-request transaction processing, and 10X the performance of other memory-centric and/or object-oriented DBMS (e.g. Oracle TimesTen, or Versant). That said, Starcounter has not yet tested VoltDB. Starcounter does not claim performance much beyond that of disk-based DBMS on analytic tasks such as aggregations.
The key technical aspect to Starcounter is integration between the DBMS and the virtual machine, so that the same copy of the data is accessed by both the DBMS and the application program, without any movement or transformation being needed. (Starcounter isn’t aware of any other object-oriented DBMS that works this way.) Transient and persistent data are handled in the same way, seamlessly.
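To see why that integration matters, it may help to contrast the usual flow with the integrated one. The sketch below is my own conceptual illustration — hypothetical classes, not Starcounter’s actual API; the point is just that the integrated model has no copy or transformation step:

```python
# Conceptual contrast only -- hypothetical classes, not Starcounter's API.

class Account:
    def __init__(self, balance):
        self.balance = balance

# Conventional approach: the application and the DBMS each hold their own copy
# of the data, with a transformation (here, a dict round trip) on every access.
class ConventionalDB:
    def __init__(self):
        self.rows = {1: {"balance": 100.0}}       # DBMS-side representation

    def fetch(self, account_id):
        return Account(**self.rows[account_id])   # copy + transform, DB -> object

    def save(self, account_id, account):
        self.rows[account_id] = {"balance": account.balance}  # copy back

# Integrated approach (the Starcounter idea, as I understand it): the object
# in the VM heap *is* the database record, so an update is just an assignment.
class IntegratedDB:
    def __init__(self):
        self.objects = {1: Account(100.0)}        # one copy, shared by DBMS and app

    def fetch(self, account_id):
        return self.objects[account_id]           # no copy, no transformation

db1, db2 = ConventionalDB(), IntegratedDB()
a = db1.fetch(1); a.balance += 50; db1.save(1, a)   # two copies plus a write-back
db2.fetch(1).balance += 50                           # direct in-heap update
```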
Other Starcounter technical highlights include: Read more
Categories: Data models and architecture, In-memory DBMS, Memory-centric data management, Object, OLTP, Starcounter, Theory and architecture | 3 Comments |
DB2 OLTP scale-out: pureScale
Tim Vincent of IBM talked me through DB2 pureScale Monday. IBM DB2 pureScale is a kind of shared-disk scale-out parallel OLTP DBMS, with some interesting twists. IBM’s scalability claims for pureScale, on a 90% read/10% write workload, include:
- 95% scalability up to 64 machines
- 90% scalability up to 88 machines
- 89% scalability up to 112 machines
- 84% scalability up to 128 machines
More precisely, those are counts of cluster “members,” but the recommended configuration is one member per operating system instance — i.e. one member per machine — for reasons of availability. In an 80% read/20% write workload, scalability is less — perhaps 90% scalability over 16 members.
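To unpack what those percentages mean: assuming the usual definition — “X% scalability at N members” means the cluster delivers about X% of N times one member’s throughput — a quick sanity check in Python gives:

```python
# What IBM's scalability percentages imply for aggregate throughput, assuming
# the usual definition: efficiency * members * one member's throughput.
claims = [(64, 0.95), (88, 0.90), (112, 0.89), (128, 0.84)]  # 90% read/10% write

for members, efficiency in claims:
    effective = members * efficiency
    print(f"{members:3d} members at {efficiency:.0%} -> "
          f"~{effective:.0f}x one member's throughput")
# 64 members at 95% -> ~61x; 128 members at 84% -> ~108x
```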
Several elements of IBM’s DB2 pureScale architecture are pretty straightforward:
- There are multiple pureScale members (machines), each with its own instance of DB2.
- There’s an RDMA (Remote Direct Memory Access) interconnect, perhaps InfiniBand. (The point of InfiniBand and other RDMA is that moving data doesn’t require interrupts, and hence doesn’t cost many CPU cycles.)
- The DB2 pureScale members share access to the database on a disk array.
- Each DB2 pureScale member has its own log, also on the disk array.
Something called GPFS (General Parallel File System), which comes bundled with DB2, sits underneath all this. The overall design is based on IBM’s mainframe Parallel Sysplex technology.
The weirdest part (to me) of DB2 pureScale is something called the Global Cluster Facility, which runs on its own set of boxes. (Edit: Actually, see Tim Vincent’s comment below.) Read more
Categories: Cache, Clustering, IBM and DB2, OLTP, Oracle | 15 Comments |
Oracle and Exadata: Business and technical notes
Last Friday I stopped by Oracle for my first conversation since January, 2010, in this case for a chat with Andy Mendelsohn, Mark Townsend, Tim Shetler, and George Lumpkin, covering Exadata and the Oracle DBMS. Key points included: Read more
Short-request and analytic processing
A few years ago, I suggested that database workloads could be divided into two kinds — transactional and analytic. The advent of non-transactional NoSQL has suggested that we need a replacement term for “transactional” or “OLTP”, but finding one has been a bit difficult. Numerous tries — including high-volume simple processing, online request processing, internet request processing, network request processing, short request processing, and rapid request processing — have turned out to be imperfect, as per discussion at each of those links. But then, no category name is ever perfect anyway. I’ve finally settled on short request processing, largely because I think it does a good job of preserving the distinction between analytic workloads and rapid-fire (“bang-bang”) not-analytic ones.
The easy part of the distinction goes roughly like this (a toy code sketch follows the list):
- Anything transactional or “OLTP” is short-request.
- Anything “OLAP” is analytic.
- Updates of small amounts of data are probably short-request, be they transactional or not.
- Retrievals of one or a few records in the ordinary course of update-intensive processing are probably short-request.
- Queries that return or aggregate large amounts of data — even in intermediate result sets — are probably analytic.
- Queries that would take a long time to run on badly-chosen or -configured DBMS are probably analytic (even if they run nice and fast on your actual system).
- Analytic processes that go beyond querying or simple arithmetic are — you guessed it! — analytic.
- Anything expressed in MDX is probably analytic.
- Driving a dashboard is usually analytic.
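To make those heuristics concrete, here’s the toy sketch promised above. The attribute names are invented for illustration; real workload classification is obviously fuzzier:

```python
# Toy encoding of the heuristics above. Attribute names are invented for
# illustration; real workloads are fuzzier than any rule list.

def classify(request):
    if request.get("is_mdx"):
        return "analytic"            # MDX is almost always analytic
    if request.get("drives_dashboard"):
        return "analytic"            # dashboards are usually analytic
    if request.get("beyond_simple_arithmetic"):
        return "analytic"            # e.g. statistics, data mining
    if request.get("rows_touched", 0) > 1_000:
        return "analytic"            # big retrievals/aggregations, even interim ones
    return "short-request"           # small updates and point lookups

print(classify({"rows_touched": 3}))           # -> short-request
print(classify({"rows_touched": 5_000_000}))   # -> analytic
print(classify({"is_mdx": True}))              # -> analytic
```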
Where the terminology gets more difficult is in a few areas of what one might call real-time or near-real-time analytics. My first takes are: Read more
Categories: Analytic technologies, Data warehousing, MySQL, NoSQL, OLTP | 33 Comments |
How about “Short Request Processing”?
While my other terminology posts seem to have gone pretty well, the Internet Request Processing name is proving a bit problematic. People seem pretty cool with the “request processing” part, but there are issues with the modifier, including:
- “Internet” doesn’t really cover everything.
- “Network” in practice sounds too low-level, and is also too general.
- “Online” is also too general.
So how about just going with “short”? OLTP requests are inherently short. “GET” and “SET” are certainly short. 🙂 In general, queries that do not involve JOINs are probably short requests. Analytic queries, however, are generally not short. Even better, all that can apply to syntax and execution time alike. 🙂
Please note that I’m focused more here on describing use cases than products. Whether products generally used to do one kind of thing can also be stretched to do another — e.g., complex analytics hardwired into a Cassandra application — is not my primary concern.
Categories: NoSQL, OLTP | 11 Comments |
Terminology: Internet Request Processing (IRP)
As I observed previously, we need a term that means “like OLTP but not necessarily transactional”, to help describe a category of use cases that can reasonably be addressed by NoSQL or scale-out SQL systems alike.* So here’s a candidate phrase: Internet Request Processing (IRP). If we use that, I’ll call Schooner, Cassandra, Couchbase, et al. IRP DBMS, while other people will probably call them IRP databases.
*Consider, for example, the overlapping use cases for Schooner, dbShards, ScaleBase, Couchbase, and DataStax/Cassandra.
In my proposed terminology, an internet request processing (IRP) use case is one in which: Read more
Categories: NoSQL, OLTP | 8 Comments |
Schooner — flash-based, now software-only, and very fast
Last October I wrote about Schooner Information Technology, which made flash-based appliances for MySQL, memcached, or persistent memcached. Schooner sold those appliances to close to 20 customers, but even so decided that software-only was a better way to go.
Schooner’s core value proposition is that one Schooner box with flash does the job of a lot of MySQL or NoSQL boxes with hard drives. Highlights of the Schooner story — of which you can find more detail at the Schooner website — now include: Read more
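A quick back-of-envelope suggests why that consolidation claim is at least plausible, if the workload is random-I/O-bound. The IOPS figures below are rough, illustrative assumptions for hardware of this era — not Schooner’s numbers:

```python
# Rough, illustrative IOPS arithmetic -- not Schooner's own figures.
HDD_IOPS = 200          # assume ~200 random IOPS per fast hard drive
FLASH_IOPS = 50_000     # assume ~50K random IOPS per enterprise flash drive

# If a workload is random-I/O-bound, one flash drive replaces roughly:
print(f"~{FLASH_IOPS // HDD_IOPS} hard drives per flash drive, I/O-wise")
# -> ~250. Real-world ratios are lower (CPU, network, and software overheads
# intervene), but the raw I/O gap is the heart of the consolidation story.
```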
Categories: Clustering, memcached, MySQL, OLTP, Schooner Information Technology, Solid-state memory | 4 Comments |
ScaleBase, another MPP OLTP quasi-DBMS
Liran Zelkha of ScaleBase raised his hand on Twitter. It turns out ScaleBase has a story rather similar to that of CodeFutures/dbShards. That is:
- Like dbShards, ScaleBase is a proxy that looks to the application like a scale-out DBMS, but routes work to multiple servers running MySQL against different shards of the database. Other DBMS beyond MySQL are planned, but PostgreSQL — which dbShards supports — did not get mentioned.
- Sharding is done at configuration time, and is transparent to the application. You want to shard the big tables and replicate the small ones, because if you join two sharded tables, performance can be slow. ScaleBase may have more of a configuration-advisor wizard than dbShards does.
- Each shard is replicated to a mirror, in a high-availability way.
- You can use ScaleBase across multiple data centers, but there’s little or no magic to overcome the performance issues that would arise in many use cases.
- Much like dbShards, ScaleBase supports three kinds of sharding — hash, list, and range (sketched in code after this list).
- ScaleBase currently has no support whatsoever for stored procedures, which is slightly less than dbShards offers.
- Liran stresses that ScaleBase looks even to management tools — e.g. TOAD — like a single DBMS.
- ScaleBase runs on EC2 and private cloud.
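Here’s a minimal Python sketch of what hash, list, and range sharding look like as routing rules — a conceptual illustration with an invented configuration, not ScaleBase’s (or dbShards’) actual implementation:

```python
# Conceptual sketch of hash/list/range shard routing -- invented config
# format, not ScaleBase's (or dbShards') actual implementation.
import hashlib

N_SHARDS = 4

def hash_shard(key):
    # Stable hash, so every proxy node routes a given key the same way
    return int(hashlib.md5(str(key).encode()).hexdigest(), 16) % N_SHARDS

def list_shard(region):
    return {"us": 0, "eu": 1, "apac": 2}.get(region, 3)  # explicit value map

def range_shard(customer_id):
    boundaries = [250_000, 500_000, 750_000]             # upper bounds per shard
    for shard, bound in enumerate(boundaries):
        if customer_id < bound:
            return shard
    return len(boundaries)                               # last shard catches the rest

print(hash_shard("order-8675309"))   # some shard in 0..3
print(list_shard("eu"))              # -> 1
print(range_shard(600_000))          # -> 2
```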
Our talk didn’t get deeply technical, and I don’t know exactly how ScaleBase’s replication works. But a website reference to a small transaction log in a distributed cache does sound at least directionally similar to the dbShards approach, even if not identical to it.
ScaleBase is a year or so old, with about 6 people, based in the Boston area despite strong Israeli roots. ScaleBase has raised a round of venture capital; I didn’t ask for details.
Liran says that ScaleBase is in closed beta, with some production users, at least one of whom has over 100 database servers.
Categories: Clustering, dbShards and CodeFutures, MySQL, OLTP, Parallelization, ScaleBase, Transparent sharding | 10 Comments |