Application areas

Posts focusing on the use of database and analytic technologies in specific application domains.

May 23, 2008

Data warehouse appliance power user TEOCO

If you had to name super-high-end users of data warehouse technology, your list might start with a few retailers, credit data processors, and telcos, plus the US intelligence establishment. Well, it turns out that TEOCO runs outsourced data warehouses for several of the top US telcos, making it one of the top data warehouse technology users around.

A few weeks ago, I had a fascinating chat with John Devolites of TEOCO. Highlights included:

May 8, 2008

Outsourced data marts

Call me slow on the uptake if you like, but it’s finally dawned on me that outsourced data marts are a nontrivial segment of the analytics business. For example:

To a first approximation, here’s what I think is going on. Read more

April 29, 2008

Truviso and EnterpriseDB blend event processing with ordinary database management

Truviso and EnterpriseDB announced today that there’s a Truviso “blade” for Postgres Plus. By email, EnterpriseDB’s Bob Zurek endorsed my tentative summary of what this means technically, namely:

  • There’s data being managed transactionally by EnterpriseDB.

  • Truviso’s DML has all along included ways to talk to a persistent Postgres data store.

  • If, in addition, one wants to do stream processing things on the same data, that’s now possible, using Truviso’s usual DML (see the sketch after this list).
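
To make the bullets above concrete, here is a rough Python sketch of the combined pattern: one Postgres-managed table that takes ordinary transactional writes and is also consumed by a continuous-style loop. This is purely illustrative, not Truviso's actual DML; the table, columns, and connection string are made up.

    # Illustrative only: one Postgres table that is (a) updated transactionally
    # and (b) treated as a stream by a polling loop standing in for a continuous query.
    # Table name, columns, and connection string are hypothetical.
    import time
    import psycopg2

    conn = psycopg2.connect("dbname=trades user=app")  # hypothetical DSN

    def record_trade(symbol, price):
        """Ordinary transactional write, as any Postgres application would do it."""
        with conn, conn.cursor() as cur:
            cur.execute(
                "INSERT INTO trades (symbol, price, ts) VALUES (%s, %s, now())",
                (symbol, price),
            )

    def follow_trades(poll_seconds=1.0):
        """Crude stand-in for a continuous query: repeatedly pull rows that
        arrived since the last poll and hand them to stream-style logic."""
        last_id = 0
        while True:
            with conn, conn.cursor() as cur:
                cur.execute(
                    "SELECT id, symbol, price FROM trades WHERE id > %s ORDER BY id",
                    (last_id,),
                )
                for row_id, symbol, price in cur.fetchall():
                    last_id = row_id
                    print("stream event:", symbol, price)  # windowing/alerting would go here
            time.sleep(poll_seconds)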

Read more

April 24, 2008

Optimizing WordPress database usage

There’s an amazingly long comment thread on Coding Horror about WordPress optimization. Key points and debates include:

Another theme is — well, it’s WordPress “theme” design. Do you really need all those calls? The most dramatic example I can think of is one I experienced soon after I started this blog. Some themes have the cool feature that, in the category list on the sidebar, there’s a count of the number of posts in the category. Each category. I love that feature, but its performance consequences are not pretty.
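
WordPress itself is PHP over MySQL, but the basic fix the caching plug-ins apply is language-neutral: compute each category's count once in a while and serve the cached value on every other page view. Here is a minimal sketch of that idea in Python; the table name, schema, and TTL are assumptions, not anything WordPress-specific.

    # Illustrative caching of per-category post counts; names and schema are made up.
    import time
    import sqlite3

    conn = sqlite3.connect("blog.db")

    _cache = {}           # category -> (count, computed_at)
    CACHE_TTL = 300       # seconds; serve a slightly stale count instead of re-querying

    def category_post_count(category):
        """Return the number of posts in a category, recomputing at most every CACHE_TTL."""
        now = time.time()
        hit = _cache.get(category)
        if hit and now - hit[1] < CACHE_TTL:
            return hit[0]
        (count,) = conn.execute(
            "SELECT COUNT(*) FROM posts WHERE category = ?", (category,)
        ).fetchone()
        _cache[category] = (count, now)
        return count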

As previously noted, we’ll be doing an emergency site upgrade ASAP. Once we’re upgraded to WordPress 2.5, I hope to deploy a rich set of back-end plug-ins. One of the caching plug-ins will be among them.

March 14, 2008

The core challenges of OLTP are changing

I wrote a few weeks ago about the H-Store project, which rejects a variety of assumptions underlying traditional OLTP database design. One of these is the assumption that the DBMS must support long transactions held over open database connections. The idea is that the most demanding OLTP applications run on the Web, where abandonment is common, and hence the only sensible option is to break things up into simple chunks. Read more
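
To illustrate the contrast, here is a small Python sketch, using sqlite3 only because it is in the standard library; the schema and the shopping-cart framing are hypothetical. The first function holds a transaction open across user think time, while the chunked versions commit each step immediately and rely on a compensating step if the shopper abandons the session.

    # Hypothetical shopping-cart schema; sqlite3 chosen only for self-containment.
    import sqlite3

    conn = sqlite3.connect("shop.db")

    def checkout_conversational(user_id, item_id, get_user_confirmation):
        """Anti-pattern for web OLTP: the transaction stays open across user think
        time, so an abandoned session leaves locks held and a connection occupied."""
        conn.execute("UPDATE inventory SET reserved = reserved + 1 WHERE item = ?", (item_id,))
        if get_user_confirmation():      # could take minutes, or never return at all
            conn.execute("INSERT INTO orders (user_id, item) VALUES (?, ?)", (user_id, item_id))
            conn.commit()
        else:
            conn.rollback()

    def reserve_item(item_id):
        """Chunked style: each step is one short transaction that commits immediately."""
        with conn:
            conn.execute("UPDATE inventory SET reserved = reserved + 1 WHERE item = ?", (item_id,))

    def place_order(user_id, item_id):
        with conn:
            conn.execute("INSERT INTO orders (user_id, item) VALUES (?, ?)", (user_id, item_id))

    def release_reservation(item_id):
        """Compensating step, run by a timeout job when a shopper abandons the cart."""
        with conn:
            conn.execute("UPDATE inventory SET reserved = reserved - 1 WHERE item = ?", (item_id,))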

February 7, 2008

Vertica update

I chatted with Andy Ellicott and Mike Stonebraker of Vertica today. Some of the content is embargoed until February 19 (for TDWI), but here are some highlights of the rest.

We also addressed the subject of Vertica’s schema assumptions, but I’ll leave that to another post.

January 24, 2008

Is MapReduce a good underpinning for next-gen scientific DBMS?

Back in November, Mike Stonebraker suggested that there’s a need for database management advances to serve “big science”. He said:

Obviously, the best solution to these … problems would be to put everything in a next-generation DBMS — one capable of keeping track of data, metadata, and lineage. Supporting the latter would require all operations on the data to be done inside the DBMS with user-defined functions — Postgres-style.
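
For readers who have not run into it, here is a minimal, single-process sketch of the MapReduce programming model the post's title asks about. It is not any particular implementation, and the sensor-reading example is made up; real systems distribute the map, shuffle, and reduce phases across a cluster.

    # Minimal sketch of the MapReduce model; toy data, single process.
    from collections import defaultdict

    def map_reduce(records, mapper, reducer):
        """Run mapper over every record, group emitted pairs by key, reduce each group."""
        groups = defaultdict(list)
        for record in records:
            for key, value in mapper(record):
                groups[key].append(value)
        return {key: reducer(key, values) for key, values in groups.items()}

    # Toy "science-flavored" example: per-sensor mean readings.
    readings = [("sensor_a", 0.9), ("sensor_b", 1.7), ("sensor_a", 1.1)]

    result = map_reduce(
        readings,
        mapper=lambda r: [(r[0], r[1])],                 # emit (sensor, value)
        reducer=lambda key, vals: sum(vals) / len(vals), # average per sensor
    )
    print(result)   # {'sensor_a': 1.0, 'sensor_b': 1.7}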

Read more

November 27, 2007

One of the funniest fake press releases ever

About an extended outage in Lord of the Rings Online.

Edited July 2, 2008: New URL that works at least for now.

October 19, 2007

One Greenplum customer — 35 terabytes and growing fast

I was at the Business Objects conference this week, and as usual went to very few sessions. But one I did stroll into was on “Managing Rapid Growth With the Right BI Strategy.” This was by Reliance Telecommunications, an outfit in India that is adding telecom subscribers very quickly, and consequently banging 100-150 gigs of data per day into a 35 terabyte warehouse.

The beginning of the talk astonished me, as the presenter seemed to be saying they were doing all this on Oracle. Hah. Oracle is what they moved away from; instead, they got Greenplum. I couldn’t get details; indeed, as a BI guy he was far enough away from DBMS to misspeak and say that Greenplum was brought in by ‘HP’, before quickly correcting himself when prompted. Read more

August 10, 2007

Applications for super-low-latency CEP

Complex event/stream processing vendors compete fiercely on the basis of low latency, down to single-digit milliseconds, or even sub-millisecond levels. A question naturally springs to mind: When does this extreme low latency matter?

I think I’ve come up with a concise yet fairly accurate answer: Super-low latency matters when the application includes direct competition against a similarly fast opponent. The best example is automated stock trading – if you can exploit a market inefficiency 1 millisecond before your competition, you make money.

Other examples might arise in network security or battlefield systems, but I don’t know of any specific real-life cases. Instead, other applications for complex event/stream processing tend to be content with latencies that are easier to achieve. E.g., 100 milliseconds (1/10 of a second) is likely to be plenty fast enough.
