eBay’s two enormous data warehouses
A few weeks ago, I had the chance to visit eBay, meet briefly with Oliver Ratzesberger and his team, and then catch up later with Oliver for dinner. I’ve already alluded to those discussions in a couple of posts, specifically on MapReduce (which eBay doesn’t like) and the astonishingly great difference between high- and low-end disk drives (to which eBay clued me in). Now I’m finally getting around to writing about the core of what we discussed, which is two of the very largest data warehouses in the world.
Metrics on eBay’s main Teradata data warehouse include:
- >2 petabytes of user data
- 10s of 1000s of users
- Millions of queries per day
- 72 nodes
- >140 GB/sec of I/O, or about 2 GB/sec per node (though maybe that's a peak when the workload is scan-heavy)
- 100s of production databases being fed in
Metrics on eBay’s Greenplum data warehouse (or, if you like, data mart) include:
- 6 1/2 petabytes of user data
- 17 trillion records
- 150 billion new records/day, which seems to suggest an ingest rate well over 50 terabytes/day (a back-of-envelope check appears after this list)
- 96 nodes
- 200 MB/node/sec of I/O (that’s the order of magnitude difference that triggered my post on disk drives)
- 4.5 petabytes of storage
- 70% compression
- A small number of concurrent users
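A quick back-of-envelope check of the figures in the two metrics lists above, as a minimal Python sketch. The ~350 bytes per detail record is my own assumption, chosen only to show how an ingest rate "well over 50 terabytes/day" could follow from 150 billion records/day; nothing else here goes beyond what the lists state.

```python
# Sanity checks on the metrics above. The bytes-per-record figure is an
# illustrative assumption, not an eBay number.

teradata_io_gb_s = 140                    # >140 GB/sec across the Teradata cluster
teradata_nodes = 72
print(teradata_io_gb_s / teradata_nodes)  # ~1.94, i.e. the "about 2 GB/sec per node" figure

greenplum_records_per_day = 150e9         # 150 billion new records/day
assumed_bytes_per_record = 350            # assumption: a few hundred bytes per detail record
ingest_tb_per_day = greenplum_records_per_day * assumed_bytes_per_record / 1e12
print(ingest_tb_per_day)                  # ~52.5 TB/day, i.e. "well over 50 terabytes/day"
```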
eBay’s Teradata installation is a full enterprise data warehouse. Besides size and scope, it is most notable for its implementation of Oliver’s misleadingly named analytics-as-a-service vision. In essence, eBay spins out dozens of virtual data marts, which:
- Combine views and aggregations on the central data warehouse with (optionally) additional “private” data the data mart user loads in.
- Are usually <5 terabytes in size, and indeed often <500 gigabytes.
- Can be created “instantaneously” by setting permissions, resource quotas, and the like.
The whole scheme relies heavily on Teradata’s workload management software to deliver with assurance on many SLAs (Service-Level Agreements) at once. Resource partitions are a key concept in all this.
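To make the provisioning idea concrete: since a virtual data mart is just views, grants, and quotas layered on the central warehouse, spinning one out requires no data movement. Below is a hypothetical Python-over-DB-API sketch of what that might look like; the database names, quota figures, and Teradata-style DDL strings are my illustrative assumptions, not eBay's or Teradata's actual scripts.

```python
# Hypothetical sketch: a "virtual data mart" is created by issuing metadata
# operations only -- a quota-limited database, some views over the central
# warehouse, and grants. Names, quotas, and DDL are illustrative assumptions.

def provision_virtual_mart(conn, mart, owner, perm_bytes, spool_bytes, views):
    cur = conn.cursor()
    # Carve out a space-limited database for the mart (Teradata-style DDL shown
    # purely as an example; the exact statements depend on the platform).
    cur.execute(
        f"CREATE DATABASE {mart} FROM edw AS PERM = {perm_bytes}, SPOOL = {spool_bytes}"
    )
    # Expose slices of the warehouse as views rather than copies, then grant access.
    for name, select_sql in views.items():
        cur.execute(f"CREATE VIEW {mart}.{name} AS {select_sql}")
        cur.execute(f"GRANT SELECT ON {mart}.{name} TO {owner}")
    conn.commit()

# Example call (all names and numbers hypothetical):
# provision_virtual_mart(conn, "mart_fraud", "fraud_analysts",
#                        perm_bytes=500 * 10**9, spool_bytes=200 * 10**9,
#                        views={"recent_bids": "SELECT * FROM edw.bids WHERE bid_date > DATE - 90"})
```

The point of the sketch is only that provisioning reduces to a handful of metadata statements, which is why it can be "instantaneous"; the heavy lifting of keeping dozens of such marts within their SLAs falls to the workload management layer described above.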
So far as I can tell, eBay uses Greenplum to manage one kind of data — web and network event logs. These seem to be managed primarily at two levels of detail — Oliver said that the 17 trillion event detail records reduce to 1 trillion real event records. When I asked where the 17:1 ratio comes from, Oliver explained that a single web page click — which is what is memorialized in an event record — resulted in 50-150 details. That leaves a missing factor of 3-8X, but perhaps other less complex kinds of events are also mixed in.
The Greenplum metrics I quoted above represent over 100 days of data. Eventually, eBay expects to keep 90-180 days of full detail, and >1 year of event data. The 6 1/2 petabyte figure comes from dividing roughly 2 petabytes of compressed data by (100% - 70%). Since that all fits on a 4 1/2 petabyte system, I presume there's only one level of mirroring (duh), not much temp space, and even less in the way of indexes.
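Spelling out that arithmetic as a minimal sketch (70% compression is read here as "the stored data is 30% of its raw size"):

```python
compressed_pb = 2.0                        # ~2 PB of compressed event data on disk
compression = 0.70                         # 70% compression -> stored size is 30% of raw
user_data_pb = compressed_pb / (1 - compression)
print(user_data_pb)                        # ~6.7 PB, matching the 6 1/2 PB user data figure

storage_pb = 4.5                           # total storage on the 96-node system
print(compressed_pb * 2 <= storage_pb)     # True: one level of mirroring still fits
```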
Two uses of eBay's Greenplum database are disclosed: whittling down from detailed to click-level event data, and sessionization. The latter seems to be done in batch runs and takes about 30 minutes per day (a rough sketch of the idea appears after the list below). A couple of other uses are undisclosed. I assume eBay is doing something that requires UDFs (User-Defined Functions), because Oliver remarked that he likes the language choices offered by Greenplum's Postgres-based UDF capability. But basically eBay's Greenplum database is used for, and evidently does very nicely at:
- Data ingest — it’s the first place log data goes
- Feeding the Teradata database
- A small number of big queries
eBay’s Teradata database handles the rest.
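Sessionization, for readers unfamiliar with the term, means grouping a visitor's click events into sessions, typically by starting a new session after a gap of inactivity. eBay hasn't disclosed how its batch job works; the following is only a minimal Python illustration of the idea, with the 30-minute timeout as an assumed parameter.

```python
from datetime import datetime, timedelta

# Minimal illustration of sessionization: assign a session id to each click,
# starting a new session whenever the gap since the user's previous click
# exceeds a timeout. The 30-minute timeout is an assumption for illustration.

SESSION_TIMEOUT = timedelta(minutes=30)

def sessionize(clicks):
    """clicks: list of (user_id, timestamp) tuples. Returns (user_id, timestamp, session_id)."""
    out = []
    last_seen = {}            # user_id -> (last timestamp, current session id)
    session_counter = 0
    for user_id, ts in sorted(clicks, key=lambda c: (c[0], c[1])):
        prev = last_seen.get(user_id)
        if prev is None or ts - prev[0] > SESSION_TIMEOUT:
            session_counter += 1
            session_id = session_counter
        else:
            session_id = prev[1]
        last_seen[user_id] = (ts, session_id)
        out.append((user_id, ts, session_id))
    return out

clicks = [
    ("u1", datetime(2009, 4, 30, 9, 0)),
    ("u1", datetime(2009, 4, 30, 9, 10)),   # same session: 10-minute gap
    ("u1", datetime(2009, 4, 30, 10, 0)),   # new session: 50-minute gap
    ("u2", datetime(2009, 4, 30, 9, 5)),
]
print(sessionize(clicks))
```

In the real system this logic would presumably run inside Greenplum as SQL and/or UDFs rather than by pulling trillions of rows into a script, but the grouping rule is the same.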
Related links:
- Wal-Mart, Bank of America, another financial services company, and Dell also have very large Teradata databases.
- Yahoo’s web/network events database, running on proprietary software, sounded about 1/6th the size of eBay’s Greenplum system when it was described about a year ago.
- Facebook has 2 1/2 petabytes managed by Hadoop — without a DBMS!
- Fox Interactive Media/MySpace has multi-hundred terabyte databases running on each of Greenplum and Aster Data nCluster.
- TEOCO has 100s of terabytes running on DATAllegro; to a probably lesser extent, the same is now also true of Dell.
- Vertica has a couple of unnamed customers with databases in the 200 terabyte range.
- In response to this post, Greenplum CTO Luke Lonergan quickly blogged about the eBay project. Other related posts on the same blog may follow.
Comments
48 Responses to “eBay’s two enormous data warehouses”
[…] Monash meets with Ebay’s Oliver Ratzesberger and gets us numbers on two of the world’s largest data warehouses in the world. Look at these Ebay […]
@Curt
I’m finding this I/O number very difficult to believe. If eBay has Teradata’s latest nodes, the 5550 series nodes, then each node could have two Quad Core Intel® Xeon® processor 5400 (2.33GHz) and the option of 4GB Quad Fibre Channel. Four 4GBFC would be 4 x 400MB/s = 1600MB/s max. By the math, it would be impossible to have a physical I/O rate of 2GB/s/node when running at max wire speed; it would allow only 1600MB/s/node. I would also comment that 2 Quad Core Xeon 5400 series processors would not be able to do anything but a SELECT COUNT(*) and ingest data at 2GB/s (or even 1600MB/s).
[…] eBay’s two enormous data warehouses | DBMS2 — DataBase Management System Services […]
[…] Ebay has a 6.5 petabyte Greenplum warehouse and a 2.5 petabyte Teradata warehouse. This system ingests hundreds of billions of new rows of data every day. Facebook has a 2.5 petabyte Hadoop system Yahoo has more than 1 petabyte running on their homemade system […]
Great to see this project out in the open Curt!
See my blog post about this.
Key important aspects of this from my perspective are:
– Full SQL analytics on 5+ PB of data, scaling to 10s of Petabytes
– Small footprint in the datacenter: 1/10 the power consumption and floor space
WRT IO performance, the 200 MB/s is the rate at which compressed data is read from disk. The speed the SQL user sees is the net effective rate after decompression, which is the compression ratio times the physical rate. Measuring the disk IO rate is only telling us how fast the decompression is running in this case. With uncompressed data, these same systems perform disk IO at 2,000 MB/s while executing queries.
We now have an improved compression scheme that delivers much higher effective rates, which will also make the disk rate higher, though that’s mostly irrelevant.
– Luke
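For readers who want the effective-rate arithmetic spelled out, here is a minimal sketch; the 200 MB/s and 70% figures come from the post above, and the rest is just the multiplication Luke describes.

```python
physical_mb_s = 200                          # compressed data read from disk, per node
compression = 0.70                           # 70% compression -> stored size is 30% of raw
compression_ratio = 1 / (1 - compression)    # ~3.3x
effective_mb_s = physical_mb_s * compression_ratio
print(effective_mb_s)                        # ~667 MB/s of uncompressed data per node, as the SQL user sees it
```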
@Curt,
Nice writeup.
That said, I think bragging rights for size should not be about how much is stored; we know that partitioning/sharding can allow you to inflate that number.
The bragging rights should be how much data can be processed in a *single query* (especially something like a global distinct, order-by or group-by over a long period of time).
Cheers,
— amr
Amr,
The only thing that limits querying the whole database in one go is the complexity of the schema. But in most cases — eBay’s Teradata EDW may be an exception — I’d guess that the largest fact table is well over half of the whole database size listed.
I'd like to see some appropriate measure between vendors; unfortunately, the TPC benchmarks didn't work for as long as we'd all have hoped.
As to single-query speed – I think this only counts if that's what your SLAs require.
I’d much rather see a measure of throughput over the dev/ops people investment.
I agree with Amr, and we win hands down.
🙂
– Luke
eBay's monstrous datacenters (ENG)…
At some point we've all wondered how much bandwidth one of the internet giants, like YouTube, Google and the rest, consumes. Well, here we get some very concrete figures on eBay's datacenters, which also specify, …
[…] went on to play Match again in Back to the Future Part II. [5] Per Curt Monash’s research blog entry as referenced by his Slashdot posting. [6] Wikipedia’s Petabyte entry. [7] Kevin […]
Greg Rahn notes “Four 4GBFC would be 4 x 400MB/s = 1600MB/s max”.
I went to the link and looked up the specs on the Teradata 5550 node. The data sheet says there are 3 PCI-X slots. It also says I/O can be 4 GB dual or quad fibre channel. My interpretation is dual or quad PCI-X adapter cards. With 3 PCI-X slots that means Teradata can have up to 12 Fibre Channel links per node for a theoretical bandwidth of 4.8 GB/s. The limiting factor is probably three 133 MHz PCI-X buses, which are 1066 MB/s apiece giving 3 GB/s per node.
Mr Rahn also says “I would also comment that 2 Quad Core Xeon 5400 series processors would not be able to do anything but a SELECT COUNT(*) and ingest data at 2GB/s (or even 1600MB/s).” And yet eBay says they are doing it – and a lot more.
Great to open the discussion of VLDB.
Without regard to the performance of end-user queries, the size of a DB is meaningless; at most it's a measure of data storage. Any limits defining a VLDB should be set by considering both the size and the performance of the DB. It would be nice to have some numbers for this.
Yan
@anonymous
I would agree with your interpretation of the dual/quad fibre channel [card]. To be honest, I would never have guessed anyone made a 4-port HBA, but I guess LSI does (the LSI7404XP-LC), and given that Teradata resells LSI Engenio storage, it is likely they use LSI's HBAs also. Given that a 2-port 4GFC PCI-X HBA can deliver 80% of the slot bandwidth, it seems like a bit of a waste to go to 4 ports, at least for performance. For connectivity, perhaps, which is why I believe Teradata may use it – for their cliques.
The other reason for my comment of a max of 1600MB/s per node is that the LSI Engenio 6998 array only does a max of 1600MB/s (per LSI’s presentation) and the 7900 array is quite new so it would seem doubtful that eBay uses that one. They may have opted for the EMC DMX storage, but I would think that would be an extremely costly solution at 72 nodes.
I may soften up a bit on the I/O rate also. Curt’s comment “maybe that’s a peak when the workload is scan-heavy” is probably correct. Peak rate vs. sustained, I could see that. Maybe they do some light scans of some large de-normalized table making it peak out at 2GB/s per node. But that number still seems quite high at 250MB/s per CPU core. Doing group bys and aggregation I’m sure that number drops fast.
I think the interesting, and unmentioned, data point is how many hard drives are in this 72-node config to deliver this I/O number. eBay's own Michael McIntire reports that Teradata I/O is all random, so the MBPS rate per drive is probably somewhere around 30MB/s (give or take). My guess is that there are somewhere between 4500 and 5000 HDDs (around 64 HDDs per node).
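Pulling the back-of-envelope numbers from this exchange into one place (a sketch only; the Fibre Channel, PCI-X, and per-drive figures are the commenters' working assumptions, not confirmed eBay or Teradata specs):

```python
# Back-of-envelope I/O plumbing math from the comment thread above.
# All inputs are the commenters' assumptions, not confirmed eBay/Teradata specs.

fc_link_mb_s = 400                  # one 4Gb Fibre Channel link
links_per_node = 12                 # 3 PCI-X slots x quad-port HBAs
print(fc_link_mb_s * links_per_node / 1000)      # 4.8 GB/s theoretical FC bandwidth per node

pcix_bus_mb_s = 1066                # 64-bit / 133 MHz PCI-X bus
buses_per_node = 3
print(pcix_bus_mb_s * buses_per_node / 1000)     # ~3.2 GB/s, the likelier per-node ceiling

node_io_mb_s = 2000                 # the ~2 GB/s/node figure from the post
per_drive_mb_s = 30                 # assumed random-I/O throughput per drive
drives_per_node = node_io_mb_s / per_drive_mb_s
print(drives_per_node, drives_per_node * 72)     # ~67 drives/node, ~4800 drives across 72 nodes
```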
[…] eBay has a 6 1/2 petabyte database running on Greenplum and a 2 1/2 petabyte enterprise data warehouse running on Teradata. […]
[…] Monash posted that eBay hosts a 6.5 petabyte Greenplum database on 96 […]
[…] million users on Skype, eBay has a massive data center infrastructure. The company houses more than 8.5 petabytes of data in huge data warehouses. We’re not certain what kind of server count this requires, but it’s certainly in the […]
Yahoo’s main data warehouse was up to 3 petabytes compressed at the end of 2007.
[…] most important reference is probably its energetic advocate Fox Interactive Media, even ahead of much larger user Greenplum user eBay, and notwithstanding Aster Data’s large presence in Fox subsidiary MySpace. I just ran across […]
[…] 6.5 Petabytes of data eBay runs the world’s largest data warehouse on Greenplum. Facebook runs a 2 PB warehouse on […]
[…] not that relational databases can’t scale – in fact, they can and do scale to petabytes, as those who know Fortune 500 enterprise computing can attest . The problem is that relational databases don’t scale easily – and require a lot of ETL […]
[…] mean for the enterprise, but they have had big data for a long time. eBay manages petabytes in its Teradata and Greenplum data warehouses. Sophisticated startups extracting value from big data is also nothing new—it has […]
Does ebay really use Greenplum? I was told they no longer use Greenplum. Can someone comment?
Every time I check, eBay is still using Greenplum.
[…] Netezza customer has been rapidly spinning out virtual data marts, in a manner somewhat akin to eBay’s virtual data mart/”analytics-as-a-service” strategy* since 2004. However, the whole thing isn’t necessarily as slick as what eBay has going. This […]
[…] has thrown out Greenplum. eBay’s 6 ½ petabyte Greenplum database has turned into a >10 petabyte Teradata database, which will grow 2 1/2x further in size […]
[…] of data warehouses reaching the petabyte-level thanks to advances in parallel processing (think eBay’s two massive data warehouses, one each run on Teradata’s and Greenplum’s platforms.) But not every enterprise has eBay’s […]
[…] not that relational databases can’t scale – in fact, they can scale to petabytes, as those who know Fortune 500 enterprise computing can attest . The problem is that relational databases require lots of ETL cruft to munge fluid blobs of data […]
I liked the article. In fact, it assisted me with my presentation. Thanks a lot.
[…] – EMC Delivers Hadoop ‘Big Data’ Analytics to the Enterprise eBay’s two enormous data warehouses More on Fox Interactive Media’s use of […]
[…] eBay already had two enormous data warehouses back in 2009, using GreenPlum and Teradata solutions respectively, each managing several PB of data. Even more impressive is that they sustain several hundred MB, or more than a GB, of I/O per second. Many large companies already had PB-scale data warehouses in 2008. […]
[…] the 21st most visited website in the world (7th in the US) according to Alexa.com. They store about 8.5 Petabytes of user data across two […]
[…] small number of “concurrent users”. Source: greenplum at ebay. Greenplum is available in three […]
[…] Monash, Curt (30 April 2009). “eBay’s two enormous data warehouses” […]
[…] Oliver Ratzesberger runs Teradata’s software development. […]