IBM and DB2
Analysis of IBM and several of its product lines in database management, analytics, and data integration.
- Cognos
- solidDB
- (in The Monash Report) Operational and strategic issues for IBM
- (in Text Technologies) IBM in the text analytics market
- (in Software Memories) Historical notes on IBM
- (in Software Memories) Historical notes on Informix
DB2 workload management
DB2 has added a lot of workload management features in recent releases. So when we talked Tuesday afternoon, Tim Vincent and I didn’t bother going through every one. Even so, we covered some interesting subjects in the area of DB2 workload management, including: Read more
Categories: Data warehousing, IBM and DB2, Netezza, Workload management | 3 Comments |
More on temp space, compression, and “random” I/O
My PhD was in a probability-related area of mathematics (game theory), so I tend to squirm when something is described as “random” that clearly is not. That said, a comment by Shilpa Lawande on our recent flash/temp space discussion suggests the following way of framing a key point:
- You really, really want to have multiple data streams coming out of temp space, as close to simultaneously as possible.
- The storage performance characteristics of such a workload are more reminiscent of “random” than “sequential” I/O.
If everybody else is cool with it too, I can live with that. 🙂
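To make that framing concrete, here's a toy Python sketch — the offsets, block sizes, and stream count are invented — of how a handful of individually sequential spool streams present a scattered, random-looking offset sequence to the storage device when read concurrently:

```python
# Toy sketch: why several individually sequential temp-space streams can
# look "random" to the storage layer. Offsets and stream count are invented.
from itertools import chain

# Three spool files, each laid out sequentially in a different disk region.
streams = [range(0, 30, 10), range(1000, 1030, 10), range(2000, 2030, 10)]

# Reading one block from each stream in turn -- a crude stand-in for
# concurrent consumers -- yields an offset sequence that jumps all over.
for offset in chain.from_iterable(zip(*streams)):
    print(offset)  # 0, 1000, 2000, 10, 1010, 2010, 20, 1020, 2020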
Meanwhile, I talked again with Tim Vincent of IBM this afternoon. Tim endorsed the temp space/Flash fit, but with a different emphasis, which upon review I find I don’t really understand. The idea is:
- Analytic DBMS processing generally stresses reads over writes.
- Temp space is an exception — read and write use of temp space is pretty balanced. (You spool data out once, you read it back in once, and that’s the end of that; next time it will be overwritten.)
My problem with that is: Flash typically has lower write than read IOPS (I/O operations per second), so being (relatively) write-intensive would, to a first approximation, seem if anything to disfavor a workload for flash.
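To spell out the arithmetic behind that objection, here's a back-of-the-envelope sketch. The IOPS figures are invented for illustration, not measurements of any real flash product, but the harmonic-blend logic is general:

```python
# Back-of-the-envelope sketch of the IOPS arithmetic. Device figures are
# invented for illustration, not measurements of any real flash product.
READ_IOPS = 50_000   # assumed peak read IOPS
WRITE_IOPS = 10_000  # assumed peak write IOPS; flash writes are slower

def effective_iops(read_fraction):
    """Harmonic blend: each read costs 1/READ_IOPS s, each write 1/WRITE_IOPS s."""
    write_fraction = 1.0 - read_fraction
    return 1.0 / (read_fraction / READ_IOPS + write_fraction / WRITE_IOPS)

print(round(effective_iops(1.0)))  # read-only workload: 50000
print(round(effective_iops(0.5)))  # balanced temp-space-style workload: ~16667
```

On these assumed numbers, a balanced read/write workload gets only about a third of the device's read-only throughput, which is why the emphasis puzzles me.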
On the plus side, I was reminded of something I should have noted when I wrote about DB2 compression before:
Much like Vertica, DB2 operates on compressed data all the way through, including in temp space.
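As a reminder of what that buys, here's a toy sketch of the simplest case, dictionary encoding, where a filter can run directly on the stored codes without decompressing anything. The column values and dictionary are invented:

```python
# Toy illustration (invented data) of operating on compressed data: with a
# dictionary-encoded column, a filter compares small codes and never
# decompresses the original strings.
dictionary = {"Boston": 0, "Chicago": 1, "Denver": 2}
encoded_column = [1, 0, 2, 1, 1, 0]  # six rows stored as one-byte codes

target = dictionary["Chicago"]  # encode the predicate constant once
matching_rows = [i for i, code in enumerate(encoded_column) if code == target]
print(matching_rows)  # [0, 3, 4]
```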
Categories: Data warehousing, Database compression, IBM and DB2, Vertica Systems | 6 Comments |
ANTs Software CEO insults Sybase, claims migration success
Edit: ANTs Software seems to have subsequently collapsed, which may be why some of these links broke too.
Jeff Pryslak of Sybase put up a post insulting ANTs Software and the general idea of ANTs-aided Sybase-to-DB2 migration. CEO Joe Kozak of ANTs hit back with a rambling diatribe, which came to my attention because he mentioned my name in it, making some rather fanciful remarks about the “long” relationship I used to have with ANTs Software. (I do recall at least one briefing, plus some attempts by them to buy my services on the condition that I agree to a ridiculous NDA, which I refused to sign.)
This piqued my interest, so — recalling that ANTs is a public company — I decided to take a look at just how successful their software products business is. Well, for the quarter ended March 31, 2010, ANTs’ 10-Q filing says (emphasis mine): Read more
Categories: ANTs Software, Emulation, transparency, portability, IBM and DB2, Sybase | 9 Comments |
Flash is coming, well …
I really, really wanted to title this post “Flash is coming in a flash.” That seems a little exaggerated — but only a little.
- Netezza now intends to come out with a flash-based appliance earlier than it originally expected.
- Indeed, Netezza has suspended — by which I mean “scrapped” — prior plans for a RAM-heavy disk-based appliance. It will use a RAM/flash combo instead.*
- Tim Vincent of IBM told me that customers seem ready to adopt solid-state memory. One interesting comment he made is that Flash isn’t really all that much more expensive than high-end storage area networks.
Uptake of solid-state memory (i.e. flash) for analytic database processing will probably stay pretty low in 2010, but in 2011 it should be a notable (b)leading-edge technology, and it should get mainstreamed pretty quickly after that. Read more
Categories: Data integration and middleware, Data warehousing, IBM and DB2, Memory-centric data management, Netezza, Solid-state memory, Theory and architecture | 4 Comments |
What kinds of data warehouse load latency are practical?
I took advantage of my recent conversations with Netezza and IBM to discuss what kinds of data warehouse load latency were practical. In both cases I got the impression:
- Subsecond load latency is substantially impossible. Doing that amounts to OLTP.
- 5 seconds or so is doable with aggressive investment and tuning.
- Several-minute load latency is pretty easy.
- 10-15 minute latency or longer is now very routine.
There’s generally a throughput/latency tradeoff, so if you want very low latency with good throughput, you may have to throw a lot of hardware at the problem.
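To illustrate that tradeoff, here's a toy model of micro-batch loading. The per-batch overhead and per-row cost are invented, but the shape of the curve is the point: shrinking batches to cut latency eats into sustained throughput.

```python
# Toy model of the throughput/latency tradeoff in micro-batch loading.
# The per-batch overhead and per-row cost are invented for illustration.
PER_BATCH_OVERHEAD_S = 0.5  # assumed fixed cost per batch (commit, index upkeep)
PER_ROW_COST_S = 0.0001     # assumed marginal cost per row

def batch_stats(rows_per_batch):
    """Return (load latency in seconds, sustained rows per second)."""
    batch_seconds = PER_BATCH_OVERHEAD_S + rows_per_batch * PER_ROW_COST_S
    return batch_seconds, rows_per_batch / batch_seconds

for rows in (1_000, 100_000, 10_000_000):
    latency, throughput = batch_stats(rows)
    print(f"{rows:>10,} rows/batch -> {latency:9.1f}s latency, {throughput:8.0f} rows/s")
```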
I’d expect to hear similar things from any other vendor with reasonably mature analytic DBMS technology. Low-latency load is a problem for columnar systems, but both Vertica and ParAccel designed in workarounds from the get-go. Aster Data probably didn’t meet these criteria until Version 4.0, its old “frontline” positioning notwithstanding, but I think it does now.
Related link
- Just what is your need for speed anyway?
Categories: Analytic technologies, Aster Data, Columnar database management, Data warehousing, IBM and DB2, Netezza, ParAccel, Vertica Systems | 4 Comments |
The Netezza and IBM DB2 approaches to compression
Thursday, I spent 3 ½ hours talking with 10 of Netezza’s more senior engineers. Friday, I talked for 1 ½ hours with IBM Fellow and DB2 Chief Architect Tim Vincent, and we agreed we needed at least 2 hours more. In both cases, the compression part of the discussion seems like a good candidate to split out into a separate post. So here goes.
When you sell a row-based DBMS, as Netezza and IBM do, there are a couple of approaches you can take to compression. First, you can compress the blocks of rows that your DBMS naturally stores. Second, you can compress the data in a column-aware way. Both Netezza and IBM have chosen completely column-oriented compression, with no block-based techniques entering the picture to my knowledge. But that’s about as far as the similarity between Netezza and IBM compression goes. Read more
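As a toy illustration of the difference between those two approaches, consider the sketch below. The rows are invented, and neither vendor's actual format looks like this; it just shows why column-aware encoding can win on repetitive columns where block compression only sees a stream of bytes:

```python
# Hypothetical contrast of the two approaches (invented rows; real DB2 and
# Netezza formats are far more sophisticated than this).
import zlib

rows = [("Boston", "2010-05-01", 100),
        ("Boston", "2010-05-01", 101),
        ("Chicago", "2010-05-02", 100)] * 1000

# Approach 1: block-level -- serialize the rows as stored, compress the block.
block = "\n".join(",".join(map(str, r)) for r in rows).encode()
print("raw bytes:", len(block), "| block-compressed:", len(zlib.compress(block)))

# Approach 2: column-aware -- dictionary-encode each column separately, so a
# low-cardinality column shrinks to one small code per row.
for position, column in enumerate(zip(*rows)):
    codes = {value: code for code, value in enumerate(dict.fromkeys(column))}
    print(f"column {position}: {len(codes)} distinct values ->"
          f" {len(column)} one-byte codes plus a tiny dictionary")
```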
Categories: Data warehousing, Database compression, IBM and DB2, Microsoft and SQL*Server, Netezza | 17 Comments |
Various quick notes
As you might imagine, there are a lot of blog posts I’d like to write that I never seem to get around to, or things I’d like to comment on that I don’t want to bother writing a full post about. In some cases I just tweet a comment or link and leave it at that.
And it’s not going to get any better. Next week = the oft-postponed elder care trip. Then I’m back for a short week. Then I’m off on my quarterly visit to the SF area. Soon thereafter I’ll have a lot to do in connection with Enzee Universe. And at that point another month will have gone by.
Anyhow: Read more
Categories: Analytic technologies, Business intelligence, Data warehousing, Exadata, GIS and geospatial, Google, IBM and DB2, Netezza, Oracle, Parallelization, SAP AG, SAS Institute | 3 Comments |
IBM puts Cast Iron Systems out of its misery
Long ago, the first enterprise application integration (EAI) vendors offered pairwise integrations between specific packaged applications. That was, for example, what was going on at Katrina Garnett’s CrossWorlds/CrossRoads, which eventually became one of IBM’s first data integration software acquisitions. Years later, Cast Iron Systems tried what seemed to be pretty much the same thing, only better implemented. Recently, however, Cast Iron has been pretty hard to get hold of, and I also couldn’t find anybody (competitor, friend of management, whatever) who believed Cast Iron was doing particularly well. So today’s news that IBM is acquiring Cast Iron Systems comes as no big surprise.
Categories: Cast Iron Systems, Data integration and middleware, EAI, EII, ETL, ELT, ETLT, IBM and DB2 | 1 Comment |
Thoughts on IBM’s anti-Oracle announcements
IBM is putting out a couple of press releases today that are obviously directed competitively at Oracle/Sun, and more specifically at Oracle’s Exadata-centric strategy. I haven’t been briefed, so I just have those to go on.
On the whole, the releases look pretty lame. Highlights seem to include:
- Maybe a claim of enhanced data compression.
- Otherwise, no obvious new technology except product packaging and bundling.
- Aggressive plans to throw capital at the Sun channel to convert it to selling IBM gear. (A figure of $1/2 billion is mentioned, for financing.)
Disappointingly, IBM shows a lot of confusion between:
- Text data
- Machine-generated data such as that from sensors
While both are highly important, those are very different things. IBM has not in the past shown much impressive technology in either area, and based on these releases, I presume that trend is continuing.
Edits:
I see from press coverage that at least one new IBM model has some Fusion I/O solid-state memory boards in it. Makes sense.
A Twitter hashtag has a number of observations from the event. Not much substance that I could detect, except various kinds of Oracle bashing.
Categories: Database compression, Exadata, IBM and DB2, Oracle, Solid-state memory | 14 Comments |
Quick news, links, comments, etc.
Some notes based on what I’ve been reading recently: Read more