Sun’s Rock chip is going to revolutionize OLTP? Yeah, right.
Ted Dziuba offers a profane and passionate screed to the effect that it would be really, really wonderful if Sun’s forthcoming Rock chip magically revolutionized OLTP. His idea — if I may dignify it with that term — seems to be that by solving some programming issues in multithreading, Sun will achieve orders of magnitude performance improvements in DBMS processing, with MySQL as the beneficiary.
Frankly, I don’t know what in the world Dziuba is talking about, and I strongly suspect that neither does he. Wikipedia wasn’t terribly enlightening, except to point out that some of the ideas originated with Tom Knight, which is encouraging. Ars Technica has a decent article about the Rock chip, but it’s hard to find support for Dziuba’s enthusiasm in their more sober discussion.
Comments
http://research.sun.com/scalable/pubs/ASPLOS2006.pdf has a lot more info on HyTM. The BDB benchmark is pretty ludicrous as evidence of general performance improvements: they construct a trivial microbenchmark in which there is needless contention for a global lock in the original BDB implementation. They then show how the microbenchmark performs with HyTM (essentially optimistic locking), at least once they hack around a few other sources of contention manually.
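Concretely, the pattern being criticized looks something like the sketch below (a made-up pthreads microbenchmark, not the paper's actual code): every thread funnels through one global lock even though each thread only touches its own data, so the threads execute one at a time. An optimistic scheme, which is roughly what HyTM provides, would let the non-conflicting updates proceed in parallel and retry only on a genuine collision.

/* Simplified illustration, not the paper's benchmark: each thread
 * updates only its own counter slot, yet all of them serialize on a
 * single global mutex, so adding cores buys essentially nothing. */
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4
#define ITERS    1000000

static pthread_mutex_t global_lock = PTHREAD_MUTEX_INITIALIZER;
static long counters[NTHREADS];              /* one slot per thread */

static void *worker(void *arg)
{
    long id = (long)arg;
    for (long i = 0; i < ITERS; i++) {
        pthread_mutex_lock(&global_lock);    /* needless serialization  */
        counters[id]++;                      /* touches only its own slot */
        pthread_mutex_unlock(&global_lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[NTHREADS];
    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);
    for (int i = 0; i < NTHREADS; i++)
        printf("counter[%d] = %ld\n", i, counters[i]);
    return 0;
}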
My impression is that while transactional memory may be promising as a technique for reducing programming complexity, it is unlikely to magically yield enormous scalability improvements that couldn’t be realized in a well-designed thread-based implementation.
Neil,
Even more directly — I don’t think existing mature DBMS have enough contention issues, at least on small numbers of cores, for there to be that much potential speed-up to be had. And I’m not sure that the magic parts of the design work over more than a few cores at once.
I thought I’d chime in here, since transactional memory is something I know a bit about.
1. The goal of TM is not to improve performance for DBMS. Most DBMS have a tremendous amount of parallelism in them already, and TM doesn’t help things there.
2. TM is about simplifying the programming model for developers. Most programmers cannot write multithreaded code safely, so TM removes that challenge. Unfortunately, in Sun’s implementation, you have to explicitly change your software to use TM, which doesn’t seem like the right approach to me (a sketch of what such an explicit change looks like appears just below).
3. TM may also enable substantial performance optimizations. Certainly, this is an area that folks in academia are pursuing.
DK
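To make point 2 above concrete, here is a rough sketch of the kind of explicit change involved. The syntax is GCC's later software-TM extension (__transaction_atomic, compiled with -fgnu-tm), used purely as a stand-in; Sun's HyTM interface was its own thing, so treat this as illustrative rather than as Rock's actual API. The programmer marks the critical region atomic and leaves conflict detection to the TM runtime (or, with hardware support, to the chip):

struct account {
    long balance;
};

/* With TM, the programmer declares what must be atomic rather than
 * how to lock it; conflicting transactions are detected and retried
 * by the TM machinery instead of by explicitly managed locks. */
void transfer(struct account *from, struct account *to, long amount)
{
    __transaction_atomic {
        from->balance -= amount;
        to->balance   += amount;
    }
}

The catch DK points at is that every such region still has to be found and annotated by hand.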
David Kanter asks whether transactional memory is for performance improvement, or whether it is for simplifying the programming model for developers. The essence of the paper’s claim is that TM’s immediate effect is to simplify the programming model, but that this in turn leads to significant performance improvements.
The reason is that processors are going to have more and more cores in the near future. It is not easy to write software that keeps all of those cores busy doing useful work all the time. But if you make multi-core programming easier, then you’ll be able to take advantage of the cores and get more speed out of each processor.
Curt says “I’m not sure that the magic parts of the design work over more than a few cores at once.” The challenge is to figure out how to take the work that needs to be done, and distribute it over many cores running in parallel, so that you get the work done more quickly. The more cores there are, the more potential gain there is, if you can program something with that much concurrency and still get it to work.
Neil Conway’s comments end with “that couldn’t be realized in a well-designed thread-based implementation.” Again, the claim here is that it’s very hard to write these multi-threaded programs (where the threads are sharing non-read-only memory), but it’s a lot easier if you are using transactions that have very low overhead because of hardware-assist.
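For anyone who hasn't fought with this, here is a hypothetical illustration of why the lock-based version is hard (the names are made up; compare the atomic-block sketch in the earlier comment): if one thread calls transfer(a, b, x) while another calls transfer(b, a, y), each can grab its first lock and then wait forever for the other's, a classic deadlock caused by inconsistent lock ordering.

#include <pthread.h>

struct account {
    pthread_mutex_t lock;
    long balance;
};

/* Thread 1: transfer(a, b, ...) holds a->lock, waits for b->lock.
 * Thread 2: transfer(b, a, ...) holds b->lock, waits for a->lock.
 * Neither can proceed: deadlock from inconsistent lock ordering. */
void transfer(struct account *from, struct account *to, long amount)
{
    pthread_mutex_lock(&from->lock);
    pthread_mutex_lock(&to->lock);
    from->balance -= amount;
    to->balance   += amount;
    pthread_mutex_unlock(&to->lock);
    pthread_mutex_unlock(&from->lock);
}

The standard fix is to impose a global lock-acquisition order and remember it everywhere the locks are touched; the claim for hardware-assisted transactions is that wrapping the two updates in a single low-overhead atomic region sidesteps that bookkeeping entirely.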
Curt, you’re right that it’s encouraging that Tom Knight was behind some of these ideas. And as far as famous names go, look at the acknowledgments section in the HTM paper: Detlefs, Heller, Herlihy, Sproull, and especially the ultra-awesome Guy Steele, who is working on a new language called Fortress that also depends on low-level transactions.
But what’s more, there are quite a lot of high-quality university researchers examining approaches to transactional memory. Prof. Anastasia Ailamaki of CMU talked about how DBMS’s are affected by many-core processors at the New England Database Day (Feb 4); it’s not just the cores, it’s also the way the caches work when there are so many cores. See http://www.cs.cmu.edu/~natassa/dbarch.html.
As for Dziuba, I’ll just say that there’s a huge difference between a promising technological idea and market success. Among many other things, will people write software that depends on Sun’s special architecture, in light of the way the industry is currently in a convergence mode regarding CPU architectures? (Where are the DEC Alphas of yesteryear?) As one of the co-founders of Symbolics, I am particularly sensitive to this issue. (Curt knows ALL about this!)
And it’s one thing to make a CPU that can do HTM, and another thing to write a full-featured DBMS that can compete with existing market offerings, which have lots of useful features, lots of DBAs for hire, lots of textbooks, and so on. If Mr. Dziuba thinks a better underlying technology translates easily into market success, I wonder how long he’s been around the computer industry.