GIS and geospatial
Analysis of data management technology optimized for geospatial data, whether by specialized indexing or user-defined functions
Teradata 13 focuses on advanced analytic performance
Last October I wrote about the Teradata 13 release of Teradata’s database management software. Teradata 13, which will be used across the various Teradata product lines, has now been announced for GCA (General Customer Availability)*. So far as I can tell, there were two main points of emphasis for Teradata 13:
- Performance (of course, performance is a point of emphasis for almost any release of any analytic DBMS product), especially but not only in the areas of aggregates, ETL (Extract/Transform/Load), and UDFs.
- UDFs (User Defined Functions), especially but not only in the areas of data mining and geospatial analysis.
To put it even more concisely, the focus of Teradata 13 is on advanced analytic performance, although there of course are some enhancements in simple query performance and in analytic functionality as well. Read more
Teradata Developer Exchange (DevX) begins to emerge
Every vendor needs developer-facing web resources, and Teradata turns out to have been working on a new umbrella site for them. It’s called Teradata Developer Exchange — DevX for short. Teradata DevX seems to be in a low-volume beta now, with a press release/bigger roll-out coming next week or so. Major elements are about what one would expect:
- Articles
- Blogs
- Downloads
- Surprisingly, so far as I can tell, no forums
If you’re a Teradata user, you absolutely should check out Teradata DevX. If you just research Teradata — my situation 🙂 — there are some aspects that might be of interest anyway. In particular, I found Teradata’s downloads instructive, most particularly those in the area of extensibility. Mainly, these are UDFs (User-Defined Functions), in areas such as:
- Compression
- Geospatial data
- Imitating Oracle or DB2 UDFs (as migration aids)
Also of potential interest is a custom-portlet framework for Teradata’s management tool Viewpoint. A straightforward use would be to plunk some Viewpoint data into a more general system management dashboard. A yet cooler use — and I couldn’t get a clear sense of whether anybody’s ever done this yet — would be to offer end users some insight as to how long their queries are apt to run.
Categories: Database compression, Emulation, transparency, portability, GIS and geospatial, Teradata | 2 Comments |
IBM’s Oracle emulation strategy reconsidered
I’ve now had a chance to talk with IBM about its recently-announced Oracle emulation strategy for DB2. (This is for DB2 9.7, which I gather was quasi-announced in April, will be re-announced in May, and will be re-re-announced as being in general availability in June.)
Key points include:
- This really is more like Oracle emulation than it is transparency, a term I carelessly used before.
- IBM’s Oracle emulation effort is focused on two technological goals:
- Making it easy for an Oracle application to be ported to DB2.
- Making it easy for an Oracle developer to develop for DB2.
- The initial target market for DB2’s Oracle emulation is ISVs (Independent Software Vendors) much more than it is enterprises. IBM suggested there were a couple hundred early adopters, and those are primarily in the ISV area.
Because of Oracle’s market share, many ISVs focus on Oracle as the underlying database management system for their applications, whether or not they actually resell it along with their own software. IBM proposed three reasons why such ISVs might want to support DB2: Read more
More Oracle notes
When I went to Oracle in October, the main purpose of the visit was to discuss Exadata. And so my initial post based on the visit was focused accordingly. But there were a number of other interesting points I’ve never gotten around to writing up. Let me now remedy that, at least in part. Read more
Teradata Geospatial, and datatype extensibility in general
As part of its 13.0 release this week, Teradata is productizing its geospatial datatype, which previously was just a downloadable library. (Edit: More precisely, Teradata announced 13.0, which will actually be shipped sometime in 2009.) What Teradata Geospatial now amounts to is:
- User-defined functions (UDFs) written by Teradata (this is the part that existed before).
- (Possibly new) Enhanced implementations of the Teradata geospatial UDFs, for better performance.
- (Definitely new) Optimizer awareness of the Teradata geospatial UDFs.
Teradata also intends in the future to implement actual geospatial indexing; candidates include r-trees and tessellation.
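To make the indexing idea concrete, here’s a toy, purely illustrative sketch of tessellation-style spatial indexing (my own illustration, not anything Teradata has shipped): points are bucketed into grid cells, so a bounding-box query inspects only candidate cells instead of evaluating a geospatial function against every row. The cell size, class names, and data are all made up.

```python
from collections import defaultdict

CELL = 1.0  # grid cell size; a real system would tune this (made-up value)

def cell_of(x, y):
    """Map a point to the grid cell (tile) that contains it."""
    return (int(x // CELL), int(y // CELL))

class GridIndex:
    """Toy tessellation index: points bucketed by grid cell."""
    def __init__(self):
        self.cells = defaultdict(list)

    def insert(self, point_id, x, y):
        self.cells[cell_of(x, y)].append((point_id, x, y))

    def query_box(self, xmin, ymin, xmax, ymax):
        """Return ids of points inside the box, visiting only overlapping cells."""
        hits = []
        for cx in range(int(xmin // CELL), int(xmax // CELL) + 1):
            for cy in range(int(ymin // CELL), int(ymax // CELL) + 1):
                for pid, x, y in self.cells.get((cx, cy), []):
                    if xmin <= x <= xmax and ymin <= y <= ymax:
                        hits.append(pid)
        return hits

idx = GridIndex()
idx.insert("store_1", 0.2, 0.7)
idx.insert("store_2", 5.5, 5.1)
print(idx.query_box(0.0, 0.0, 1.0, 1.0))  # -> ['store_1']
```

An r-tree pursues the same goal differently, replacing the fixed grid with nested bounding rectangles that adapt to how the data is actually distributed.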
Hearing this was a good wake-up call for me, because in the past I’ve conflated two issues on datatype extensibility, namely:
- Whether the query executor uses a special access method (i.e., index type) for the datatype
- Whether the optimizer is aware of the datatype.
But as Teradata just pointed out, those two issues can indeed be separated from each other.
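Here’s a back-of-the-envelope way to see that separation, using entirely made-up cost numbers (my illustration, not Teradata’s optimizer): knowing a geospatial UDF’s cost and selectivity lets the optimizer order predicates sensibly even without any index, while a spatial access method is an additional, independent win.

```python
# Toy cost model (made-up numbers) separating the two questions:
# does the optimizer understand the datatype, and does the executor
# have a special access method for it?

ROWS = 1_000_000
UDF_COST = 5.0          # cost to evaluate a geospatial UDF on one row
CHEAP_COST = 1.0        # cost of an ordinary predicate on the same row
CHEAP_SEL = 0.10        # ordinary predicate keeps 10% of rows
SPATIAL_SEL = 0.001     # geospatial predicate keeps 0.1% of rows

def naive_scan():
    """Optimizer treats the UDF as a black box: evaluates it on every row."""
    return ROWS * UDF_COST + ROWS * SPATIAL_SEL * CHEAP_COST

def optimizer_aware_scan():
    """No index, but the optimizer knows the UDF is costly and filters cheaply first."""
    return ROWS * CHEAP_COST + ROWS * CHEAP_SEL * UDF_COST

def indexed_lookup():
    """A spatial access method (e.g. an r-tree) fetches only qualifying rows."""
    return ROWS * SPATIAL_SEL * (UDF_COST + CHEAP_COST)

print(f"naive scan:      {naive_scan():>12,.0f}")
print(f"optimizer-aware: {optimizer_aware_scan():>12,.0f}")
print(f"indexed lookup:  {indexed_lookup():>12,.0f}")
```

With these made-up numbers, the naive plan costs roughly 5,000,000 units, the optimizer-aware plan roughly 1,500,000, and the indexed plan roughly 6,000. The middle improvement requires no new access method at all.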
Categories: Data types, Data warehousing, GIS and geospatial, Teradata | 1 Comment |
Netezza and Teradata on analytic geospatial data management
Geospatial data management is one of the flavors of the month:
- Last week, Teradata claimed it has the most sophisticated analytic geospatial data management capability.
- Also last week, Netezza’s newly acquired Netezza Spatial technology attracted a lot of attention.
- This week, Oracle called attention to its geospatial capabilities.
So I asked Netezza and Teradata what this geospatial analytics stuff is all about. Read more
Categories: Analytic technologies, Data warehousing, GIS and geospatial, Netezza, Teradata | 3 Comments |
Oracle spotlights its datatype support
Oracle put out a flurry of press releases today in conjunction with Oracle OpenWorld. One, which was simply positioned as a report on some “mission-critical” customer apps, caught my eye because all four detailed examples involved nonstandard datatypes:
- Two Oracle Spatial
- One “semantic,” which in Oracle lingo seems to mean — you guessed it — RDF
- One DICOM, which seems to be a medical imaging datatype.
Categories: Data types, GIS and geospatial, Oracle, RDF and graphs | 3 Comments |
Peter Batty on Netezza Spatial
As previously noted, I’m not up to speed on Netezza Spatial. Phil Francisco of Netezza has promised we’ll fix that ASAP. In the meantime, I found a blog by a guy named Peter Batty, who evidently:
- Knows a lot about geospatial data and its uses
- Is consulting to Netezza
- Is smart
Batty offers a lot of detail in two recent posts, intermixed with some gollygeewhiz about Netezza in general. If you’re interested in this stuff, Batty’s blog is well worth checking out. Read more
Categories: Analytic technologies, Data warehousing, GIS and geospatial, Netezza, Telecommunications | 2 Comments |
Teradata decides to compete head-on as a data warehouse appliance vendor
In a press release today that is surely timed to impinge on the Netezza user conference news cycle, Teradata has come out swinging. Highlights include:
- Teradata, which long avoided the “appliance” term, now says it sells both “data warehouse appliances” and “data mart appliances.” Indeed, it claims to have “invented the original appliance” — which is pretty close to being true.*
- Teradata claims its “new appliance easily delivers up to 5 to 10 times performance improvement over competitors’ appliances,” at $119,000 per terabyte US list price.
- Teradata claims a 150% faster “scan rate” than competitors. Teradata is surely thinking of Netezza when saying that.
- Teradata claims 10X performance improvement on “selected queries” vs. the “competition.”
- Teradata thinks its geospatial data management capability is better than competitors’, and that this is an important indicator of Teradata’s general overall greater sophistication.
Categories: Analytic technologies, Data warehouse appliances, Data warehousing, GIS and geospatial, Netezza, Teradata | 4 Comments |
McObject eXtremeDB — a solidDB alternative
McObject — vendor of the memory-centric DBMS eXtremeDB — is a tiny, tiny company, without a development team of the size one would think is needed to turn out one or more highly reliable DBMS. So I haven’t spent a lot of time thinking about whether it’s a serious alternative to solidDB for embedded DBMS, e.g. in telecom equipment. However:
- IBM’s acquisition of Solid seems to suggest a focus on DB2 caching rather than the embedded market
- McObject actually has built up something of a customer list, as per the boilerplate on any of its press releases.
And they do seem to have some nice features, including Patricia tries (like solidDB), R-trees (for geospatial), and some kind of hybrid disk-centric/memory-centric operation.
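For a rough sense of why Patricia tries appeal to a memory-centric DBMS, here’s a minimal path-compressed trie sketch (my own toy illustration in Python, not McObject’s code): shared key prefixes are stored once, and a lookup does at most one edge comparison per branching point, which keeps both memory use and pointer-chasing down.

```python
class RadixNode:
    def __init__(self):
        self.children = {}   # first character of edge label -> (edge_label, child)
        self.value = None    # payload stored at this node, if any

class RadixTree:
    """Minimal path-compressed trie (the idea behind Patricia tries)."""

    def __init__(self):
        self.root = RadixNode()

    def insert(self, key, value):
        node = self.root
        while key:
            first = key[0]
            if first not in node.children:
                leaf = RadixNode()
                leaf.value = value
                node.children[first] = (key, leaf)
                return
            label, child = node.children[first]
            # Length of the common prefix between the edge label and the key.
            i = 0
            while i < len(label) and i < len(key) and label[i] == key[i]:
                i += 1
            if i < len(label):
                # Split the edge: the existing child moves below a new middle node.
                middle = RadixNode()
                middle.children[label[i]] = (label[i:], child)
                node.children[first] = (label[:i], middle)
                child = middle
            key = key[i:]
            node = child
        node.value = value

    def get(self, key):
        node = self.root
        while key:
            entry = node.children.get(key[0])
            if entry is None:
                return None
            label, child = entry
            if not key.startswith(label):
                return None
            key = key[len(label):]
            node = child
        return node.value

t = RadixTree()
t.insert("romane", 1)
t.insert("romanus", 2)
t.insert("rubens", 3)
print(t.get("romanus"), t.get("rub"))  # -> 2 None
```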