October 18, 2009
Introduction to SenSage
I visited with SenSage on my two most recent trips to San Francisco. Both visits were, through no fault of SenSage’s, hasty. Still, I think I have enough of a handle on SenSage basics to be worth writing up.
General SenSage highlights include:
- SenSage used to be known as Addamark.
- SenSage used to characterize itself as being in the Security Information Management (SIM) market.
- Now SenSage characterizes itself (approximately) as selling technology built around a columnar DBMS that happens to be pretty good at log analysis, compliance, and/or archiving.
- More concisely, SenSage says it is in the event data warehouse category. (The same could arguably be said of Splunk.)
- SenSage says it has >400 paying customers, of which ~200 are direct.
- SenSage has >120 employees and, like Splunk, is profitable.
- SenSage has enjoyed >50% annual revenue growth the past four years.
- Some SenSage deals are in the multiple-million dollar range.
- A major SenSage channel partner – dozens of installations — is SAP, which resells SenSage software on HP hardware as a “Compliance Log Warehouse.”
- A hot market for SenSage is CDRs (Call Detail Records).
- SenSage says that, among analytic DBMS vendors, it competes with Oracle, IBM, Teradata, Netezza and, to some extent, Vertica and Greenplum.
Technical SenSage highlights include:
- SenSage’s core technology is an append-only columnar DBMS, with no master node.
- SenSage’s DBMS uses no indexes and requires “no” database administration.
- SenSage’s database is range-partitioned, with the range-partition key always being time.
- SenSage has something it calls SQO (Sparse Query Optimization), which sounds a lot like Netezza zone maps. SQO never yields a false negative on whether data is in a block, never yields a false positive on equality predicates, and only rarely yields a false positive on range predicates.
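To make the false-positive/false-negative claims concrete, here is a minimal sketch of zone-map-style block filtering. The class and method names are illustrative, not SenSage's actual implementation; I'm assuming per-block min/max bounds for range predicates plus an exact per-block value set, which is one way equality predicates could avoid false positives entirely.

```python
class BlockStats:
    """Hypothetical per-block metadata for sparse query optimization."""

    def __init__(self, values):
        self.lo, self.hi = min(values), max(values)
        self.distinct = set(values)   # exact value set -> no equality false positives

    def may_contain_eq(self, v):
        # Exact membership test: never a false positive or false negative.
        return v in self.distinct

    def may_contain_range(self, lo, hi):
        # Min/max overlap test: never a false negative, but can be a false
        # positive when the queried range falls in a gap between stored values.
        return not (hi < self.lo or lo > self.hi)


blocks = [BlockStats([1, 5, 9]), BlockStats([20, 30, 40])]

# Range 6..8 overlaps the first block's [1, 9] bounds even though no stored
# value falls in it -- a (rare) range false positive, as the post describes.
hits = [b for b in blocks if b.may_contain_range(6, 8)]
```

With large blocks, a scan only has to touch the blocks that survive this cheap metadata check, which is why no conventional indexes are needed.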
- SenSage’s database uses large block sizes – typically 250,000 records/block, at 200-250 bytes per record. (That’s in the range of 64 megabytes/block.)
- SenSage says its software can load 10,000-50,000 records/second/node. If I’m doing the arithmetic correctly, that’s roughly 7-45 gigabytes/node/hour.
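The back-of-the-envelope arithmetic behind that figure, using the record sizes quoted above (200-250 bytes/record), works out as follows:

```python
# Load-rate arithmetic from the post's quoted figures:
# 10,000-50,000 records/second/node at 200-250 bytes/record.
SECONDS_PER_HOUR = 3600
GB = 10**9

low = 10_000 * 200 * SECONDS_PER_HOUR / GB    # slow rate, small records
high = 50_000 * 250 * SECONDS_PER_HOUR / GB   # fast rate, large records

print(f"{low:.1f} to {high:.1f} GB/node/hour")  # prints "7.2 to 45.0 GB/node/hour"
```

Pairing the top rate with 200-byte records instead gives 36 GB/node/hour, so any figure in the 7-45 range is defensible depending on which record size you assume.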
- SenSage collects log data into its event data warehouse in what it characterizes as an agentless manner. Even so, it seems that for most kinds of data sources one does have to write custom agents. The two other ways to get data into SenSage – and presumably most of the data volume comes through these – are:
- File transfer in the usual way
- syslog
- SenSage says its software can read 100s of data sources, and that this is a huge competitive advantage. I’m not totally sure how that jibes with the prior point.
- SenSage says it gets 5X compression on CDR data, 10-20X on other kinds of logs. That’s not too far off from Vertica’s compression figures.
- SenSage says that it has datatype-aware compression as well as more standard stuff, with VARCHAR compressing particularly well.
- In particular, SenSage uses both dictionary/token and delta compression.
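Purely as illustration (this is not SenSage's code), the two named techniques look roughly like this: dictionary/token encoding replaces repetitive strings with small integer tokens, while delta encoding stores the first value plus successive differences, which works especially well on nearly-monotonic values such as log timestamps.

```python
def dict_encode(column):
    """Dictionary/token encoding: map each distinct string to a small integer."""
    dictionary = {}
    tokens = []
    for v in column:
        if v not in dictionary:
            dictionary[v] = len(dictionary)
        tokens.append(dictionary[v])
    return dictionary, tokens


def delta_encode(values):
    """Delta encoding: first value, then successive differences."""
    return [values[0]] + [b - a for a, b in zip(values, values[1:])]


hosts = ["web01", "web02", "web01", "web01"]
dictionary, tokens = dict_encode(hosts)   # tokens == [0, 1, 0, 0]

timestamps = [1000, 1001, 1003, 1010]
deltas = delta_encode(timestamps)         # deltas == [1000, 1, 2, 7]
```

Small deltas and small token integers then compress much further under standard bit-packing or general-purpose compression, which is how log data gets to 10-20X.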
- SenSage’s software is pretty agnostic with respect to storage kind – DAS (Direct Attached Storage), SAN (Storage-Area Network), or content-addressable. In particular, there’s only about a 4% performance hit for using content-addressable storage.
- When using WORM (Write Once Read Many) storage like EMC’s Centera, SenSage leaves record locator information behind on ordinary storage and otherwise queries the WORM storage just like it queries anything else.
- SenSage says it has been using MapReduce since “Day 1”.
- Probably not coincidentally, you can write aggregates in Perl (among other options) and use them in SenSage SQL statements.
- Perhaps also not coincidentally, SenSage says it has a number of advanced built-in analytic functions, including some focused on sessionization.
In addition to all that, SenSage offers a built-in event processing engine, consisting of:
- A finite-state machine correlation engine.
- A proprietary event processing language.
- A GUI to “abstract” (i.e., generate?) the event processing language.
The SenSage event processing engine is used to generate alerts. Data that comes into SenSage actually is passed to two places at once, namely to both the event processing engine and the database itself.
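To give a flavor of what a finite-state machine correlation rule does, here is a minimal sketch. The rule, threshold, and event names are all hypothetical (SenSage's engine and event processing language are proprietary): it tracks per-user state across a stream and fires an alert when a run of failed logins reaches a threshold before a successful login resets the machine.

```python
from collections import defaultdict

THRESHOLD = 3  # hypothetical rule: alert on 3 consecutive failed logins


def correlate(events):
    """Run a per-user failed-login state machine over a stream of events."""
    failures = defaultdict(int)   # per-user state: consecutive failure count
    alerts = []
    for user, etype in events:
        if etype == "login_failed":
            failures[user] += 1
            if failures[user] == THRESHOLD:
                alerts.append(user)
        elif etype == "login_ok":
            failures[user] = 0    # a success resets the state machine
    return alerts


stream = [("alice", "login_failed")] * 3 + [
    ("bob", "login_failed"),
    ("bob", "login_ok"),
]
print(correlate(stream))  # prints "['alice']"
```

Because the same incoming data also lands in the database, an alert like this can be investigated afterward with full SQL over the historical record.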
Categories: Analytic technologies, Columnar database management, Data warehousing, Database compression, Log analysis, MapReduce, SenSage, Streaming and complex event processing (CEP), Telecommunications
3 Responses to “Introduction to SenSage”
Curt, thanks for your post on SenSage.
Here are a few updates for you and your readers…
• SenSage offers Security Information and Event Management (SIEM), Log Management, and Governance, Risk and Compliance (GRC) solutions built atop an event data warehouse that provides scalability, query flexibility and TCO advantages over traditional competitors in the security market.
• A major SenSage channel partner is HP, which resells SenSage software on HP hardware as the “Compliance Log Warehouse” appliance.
• A hot market for SenSage is CDR (Call Detail Record) Retention and Retrieval. We have won some of our biggest deals against analytic DBMS competitors Oracle, IBM, Teradata, Netezza and, to some extent, Vertica and Greenplum.
• SenSage also has an alliance and integration with SAP that enables continuous monitoring and compensating controls on all SAP transactions and security audit logs.
@Danny,
Did I conflate your HP and SAP deals incorrectly?
Thanks,
CAM
I don’t think that you can compare SenSage to Splunk. Splunk is a much larger company with 2x revenues. SenSage has never turned a profit after 5 rounds of funding. They also have about 70 employees, not 120.